Dec 12 16:15:05 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 16:15:05 crc kubenswrapper[5116]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 16:15:05 crc kubenswrapper[5116]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 12 16:15:05 crc kubenswrapper[5116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 16:15:05 crc kubenswrapper[5116]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 16:15:05 crc kubenswrapper[5116]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 16:15:05 crc kubenswrapper[5116]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.819764 5116 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822197 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822222 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822227 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822231 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822234 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822238 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822243 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822247 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822251 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822256 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822261 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822265 5116 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822271 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822276 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822281 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822286 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822290 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822294 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822299 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822302 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822306 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822310 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822314 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822318 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822321 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822325 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822329 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822333 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822337 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822340 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822344 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822348 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822351 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822355 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822359 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822363 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822367 5116 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822370 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822373 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822377 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822381 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822385 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822388 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822391 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822395 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822399 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822402 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822408 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822411 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822415 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822418 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822424 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822429 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822432 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822436 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822439 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822444 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822448 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822452 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822456 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822459 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822462 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822466 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822470 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822474 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822478 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822481 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822485 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822489 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822492 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822495 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822498 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822502 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822506 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822509 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822513 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822516 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822520 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822524 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822529 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822532 5116 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822565 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822570 5116 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822573 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822577 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822580 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822968 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822976 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822980 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822986 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822992 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.822997 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823001 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823005 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823009 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823013 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823019 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823023 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823028 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823033 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823036 5116 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823040 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823043 5116 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823047 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823051 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823054 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823058 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823062 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823065 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823069 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823072 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823077 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823080 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823083 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823087 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823090 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823094 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823097 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823120 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823124 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823128 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823133 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823137 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823141 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823145 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823149 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823153 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823157 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823161 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823165 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823169 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823172 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823178 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823182 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823187 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823191 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823195 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823199 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823203 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823207 5116 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823212 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823216 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823220 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823226 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823230 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823234 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823238 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823242 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823247 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823251 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823255 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823259 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823264 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823269 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823273 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823278 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823282 5116 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823286 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823290 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823296 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823300 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823304 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823309 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823313 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823317 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823322 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823326 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823330 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823334 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823338 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823342 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.823346 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823596 5116 flags.go:64] FLAG: --address="0.0.0.0"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823610 5116 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823621 5116 flags.go:64] FLAG: --anonymous-auth="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823633 5116 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823640 5116 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823645 5116 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823651 5116 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823658 5116 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823663 5116 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823668 5116 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823673 5116 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823678 5116 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823684 5116 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823689 5116 flags.go:64] FLAG: --cgroup-root=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823694 5116 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823699 5116 flags.go:64] FLAG: --client-ca-file=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823704 5116 flags.go:64] FLAG: --cloud-config=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823709 5116 flags.go:64] FLAG: --cloud-provider=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823714 5116 flags.go:64] FLAG: --cluster-dns="[]"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823721 5116 flags.go:64] FLAG: --cluster-domain=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823726 5116 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823731 5116 flags.go:64] FLAG: --config-dir=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823736 5116 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823741 5116 flags.go:64] FLAG: --container-log-max-files="5"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823747 5116 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823753 5116 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823759 5116 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823764 5116 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823770 5116 flags.go:64] FLAG: --contention-profiling="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823775 5116 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823781 5116 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823786 5116 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823792 5116 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823800 5116 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823804 5116 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823812 5116 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823817 5116 flags.go:64] FLAG: --enable-load-reader="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823822 5116 flags.go:64] FLAG: --enable-server="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823827 5116 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823837 5116 flags.go:64] FLAG: --event-burst="100"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823842 5116 flags.go:64] FLAG: --event-qps="50"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823847 5116 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823852 5116 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823857 5116 flags.go:64] FLAG: --eviction-hard=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823863 5116 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823868 5116 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823873 5116 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823878 5116 flags.go:64] FLAG: --eviction-soft=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823882 5116 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823887 5116 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823892 5116 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823897 5116 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823903 5116 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823908 5116 flags.go:64] FLAG: --fail-swap-on="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823913 5116 flags.go:64] FLAG: --feature-gates=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823919 5116 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823923 5116 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823928 5116 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823933 5116 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823939 5116 flags.go:64] FLAG: --healthz-port="10248"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823944 5116 flags.go:64] FLAG: --help="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823949 5116 flags.go:64] FLAG: --hostname-override=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823953 5116 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823958 5116 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823963 5116 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823967 5116 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823972 5116 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823978 5116 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823983 5116 flags.go:64] FLAG: --image-service-endpoint=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823988 5116 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823995 5116 flags.go:64] FLAG: --kube-api-burst="100"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.823999 5116 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824005 5116 flags.go:64] FLAG: --kube-api-qps="50"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824010 5116 flags.go:64] FLAG: --kube-reserved=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824014 5116 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824019 5116 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824024 5116 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824029 5116 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824033 5116 flags.go:64] FLAG: --lock-file=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824037 5116 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824042 5116 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824046 5116 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824053 5116 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824057 5116 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824061 5116 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824065 5116 flags.go:64] FLAG: --logging-format="text"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824068 5116 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824072 5116 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824076 5116 flags.go:64] FLAG: --manifest-url=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824079 5116 flags.go:64] FLAG: --manifest-url-header=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824086 5116 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824089 5116 flags.go:64] FLAG: --max-open-files="1000000"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824095 5116 flags.go:64] FLAG: --max-pods="110"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824099 5116 flags.go:64]
FLAG: --maximum-dead-containers="-1" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824130 5116 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824135 5116 flags.go:64] FLAG: --memory-manager-policy="None" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824138 5116 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824142 5116 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824146 5116 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824151 5116 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824162 5116 flags.go:64] FLAG: --node-status-max-images="50" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824166 5116 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824171 5116 flags.go:64] FLAG: --oom-score-adj="-999" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824176 5116 flags.go:64] FLAG: --pod-cidr="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824180 5116 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824188 5116 flags.go:64] FLAG: --pod-manifest-path="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824191 5116 flags.go:64] FLAG: --pod-max-pids="-1" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824195 5116 flags.go:64] FLAG: --pods-per-core="0" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824199 5116 flags.go:64] FLAG: --port="10250" Dec 12 16:15:05 crc 
kubenswrapper[5116]: I1212 16:15:05.824204 5116 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824208 5116 flags.go:64] FLAG: --provider-id="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824213 5116 flags.go:64] FLAG: --qos-reserved="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824218 5116 flags.go:64] FLAG: --read-only-port="10255" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824223 5116 flags.go:64] FLAG: --register-node="true" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824228 5116 flags.go:64] FLAG: --register-schedulable="true" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824233 5116 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824241 5116 flags.go:64] FLAG: --registry-burst="10" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824245 5116 flags.go:64] FLAG: --registry-qps="5" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824249 5116 flags.go:64] FLAG: --reserved-cpus="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824253 5116 flags.go:64] FLAG: --reserved-memory="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824258 5116 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824262 5116 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824266 5116 flags.go:64] FLAG: --rotate-certificates="false" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824270 5116 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824274 5116 flags.go:64] FLAG: --runonce="false" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824278 5116 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 
16:15:05.824283 5116 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824287 5116 flags.go:64] FLAG: --seccomp-default="false" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824291 5116 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824295 5116 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824299 5116 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824304 5116 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824310 5116 flags.go:64] FLAG: --storage-driver-password="root" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824314 5116 flags.go:64] FLAG: --storage-driver-secure="false" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824318 5116 flags.go:64] FLAG: --storage-driver-table="stats" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824322 5116 flags.go:64] FLAG: --storage-driver-user="root" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824325 5116 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824329 5116 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824333 5116 flags.go:64] FLAG: --system-cgroups="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824336 5116 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824343 5116 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824346 5116 flags.go:64] FLAG: --tls-cert-file="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824350 5116 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 12 16:15:05 
crc kubenswrapper[5116]: I1212 16:15:05.824355 5116 flags.go:64] FLAG: --tls-min-version="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824359 5116 flags.go:64] FLAG: --tls-private-key-file="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824362 5116 flags.go:64] FLAG: --topology-manager-policy="none" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824366 5116 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824370 5116 flags.go:64] FLAG: --topology-manager-scope="container" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824374 5116 flags.go:64] FLAG: --v="2" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824379 5116 flags.go:64] FLAG: --version="false" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824384 5116 flags.go:64] FLAG: --vmodule="" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824389 5116 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824393 5116 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824486 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824490 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824494 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824498 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824501 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824505 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 
16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824509 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824513 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824516 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824519 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824522 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824528 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824531 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824535 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824538 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824543 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824546 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824549 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824553 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824556 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 
16:15:05.824559 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824562 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824566 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824569 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824573 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824576 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824579 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824582 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824586 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824589 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824592 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824596 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824599 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824604 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824608 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824611 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824616 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824620 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824625 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824629 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824632 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824635 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824639 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824646 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824649 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824652 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824656 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824659 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 16:15:05 crc 
kubenswrapper[5116]: W1212 16:15:05.824662 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824666 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824669 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824672 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824675 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824678 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824682 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824685 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824688 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824692 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824695 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824698 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824703 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824707 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824710 5116 
feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824713 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824716 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824719 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824722 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824725 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824728 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824732 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824735 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824739 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824743 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824746 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824749 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824754 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824757 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 
16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824760 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824763 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824767 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824770 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824773 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824777 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824780 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824783 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.824786 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.824941 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.842096 5116 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.842129 
5116 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842175 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842181 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842185 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842193 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842197 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842201 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842205 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842209 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842212 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842217 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842222 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842226 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842230 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842234 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842237 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842241 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842244 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842247 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842251 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842256 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842262 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842266 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842271 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842275 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842279 5116 
feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842283 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842287 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842291 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842296 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842300 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842304 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842308 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842313 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842317 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842321 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842325 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842329 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842333 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842336 5116 
feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842340 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842344 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842348 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842352 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842356 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842360 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842364 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842368 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842371 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842375 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842379 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842382 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842387 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842390 5116 
feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842394 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842398 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842402 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842405 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842408 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842412 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842416 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842419 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842423 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842426 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842430 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842433 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842437 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842441 5116 feature_gate.go:328] unrecognized 
feature gate: GatewayAPIController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842444 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842448 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842451 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842455 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842459 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842462 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842466 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842470 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842473 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842478 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842482 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842486 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842489 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842494 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842498 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842502 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842506 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842510 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842513 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.842519 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842648 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842654 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842658 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842662 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842665 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 
16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842669 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842673 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842676 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842679 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842683 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842686 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842690 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842693 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842696 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842699 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842703 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842707 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842711 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842714 5116 feature_gate.go:328] unrecognized feature gate: 
ManagedBootImagesAzure Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842718 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842721 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842725 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842729 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842733 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842737 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842740 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842744 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842747 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842750 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842754 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842758 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842761 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842765 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842769 5116 
feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842772 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842775 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842778 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842782 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842785 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842788 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842793 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842797 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842801 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842804 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842809 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842813 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842817 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842820 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842824 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842828 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842831 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842834 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842838 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842841 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842845 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842849 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842853 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842857 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842860 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: 
W1212 16:15:05.842864 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842867 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842872 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842876 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842880 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842883 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842887 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842890 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842894 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842898 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842902 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842905 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842909 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842912 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842916 5116 feature_gate.go:328] unrecognized feature 
gate: KMSEncryptionProvider Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842919 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842923 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842927 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842930 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842933 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842939 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842943 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842946 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842950 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842953 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842957 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 16:15:05 crc kubenswrapper[5116]: W1212 16:15:05.842960 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.842965 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true 
RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.843266 5116 server.go:962] "Client rotation is on, will bootstrap in background" Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.845908 5116 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.850040 5116 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.850199 5116 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.850801 5116 server.go:1019] "Starting client certificate rotation" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.850938 5116 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.851008 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.862302 5116 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.864467 5116 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.864670 5116 certificate_manager.go:596] "Failed while 
requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.873926 5116 log.go:25] "Validated CRI v1 runtime API" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.903209 5116 log.go:25] "Validated CRI v1 image API" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.905478 5116 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.912418 5116 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-12-16-08-48-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.912462 5116 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.941192 5116 manager.go:217] Machine: {Timestamp:2025-12-12 16:15:05.936598672 +0000 UTC m=+0.400810468 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649934336 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:26268ba2-1151-4589-80cf-5071a8d9f1b0 BootID:56a9ea63-479b-430c-9c05-3bf8c2deb332 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824967168 Type:vfs Inodes:4107658 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729990144 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107658 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:93:f7:11 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:93:f7:11 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0b:56:e2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:70:31:1e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:19:f8:f0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7c:de:27 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4a:32:90:ec:6a:d8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:32:14:89:4c:c8:bc Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649934336 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 
NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction 
Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.941465 5116 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.941660 5116 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.944256 5116 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.944301 5116 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.944520 5116 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.944530 5116 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.944551 5116 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.945654 5116 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.946120 5116 state_mem.go:36] "Initialized new in-memory state store" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.946273 5116 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.947899 5116 kubelet.go:491] "Attempting to sync node with API server" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.947925 5116 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.947942 5116 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.947956 5116 kubelet.go:397] "Adding apiserver pod source" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.947973 5116 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.950653 5116 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.950680 5116 state_mem.go:40] "Initialized 
new in-memory state store for pod resource information tracking"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.951635 5116 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.951651 5116 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.952965 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.952966 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.953286 5116 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.953465 5116 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.953855 5116 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954309 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954333 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954341 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954349 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954356 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954363 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954371 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954379 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954387 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954399 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954408 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.954534 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.955719 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.955734 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.959211 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.967284 5116 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.967373 5116 server.go:1295] "Started kubelet"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.967577 5116 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.967739 5116 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.967823 5116 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.968471 5116 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 16:15:05 crc systemd[1]: Started Kubernetes Kubelet.
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.969904 5116 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.969964 5116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.970307 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.970527 5116 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.969765 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.248:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188083ec8f344f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:05.967325045 +0000 UTC m=+0.431536801,LastTimestamp:2025-12-12 16:15:05.967325045 +0000 UTC m=+0.431536801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.970609 5116 volume_manager.go:295] "The desired_state_of_world populator starts"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.970629 5116 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.970783 5116 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.970930 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="200ms"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.972231 5116 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.972262 5116 factory.go:55] Registering systemd factory
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.972274 5116 factory.go:223] Registration of the systemd container factory successfully
Dec 12 16:15:05 crc kubenswrapper[5116]: E1212 16:15:05.972713 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.976450 5116 factory.go:153] Registering CRI-O factory
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.976478 5116 factory.go:223] Registration of the crio container factory successfully
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.976515 5116 factory.go:103] Registering Raw factory
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.976533 5116 manager.go:1196] Started watching for new ooms in manager
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.977286 5116 manager.go:319] Starting recovery of all containers
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.995909 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.995969 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.995987 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.995999 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996011 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996024 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996032 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996045 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996068 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996080 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996089 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996115 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996127 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996135 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996150 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996161 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996188 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996201 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996217 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996236 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996270 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996282 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996296 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996309 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996322 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996337 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996349 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996363 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996389 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996404 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996413 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996425 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996436 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996449 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996459 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996468 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996481 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996490 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert"
seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996502 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996513 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996525 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996536 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996548 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996561 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996572 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996584 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996608 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996618 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996631 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996642 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996655 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Dec 12 16:15:05 crc kubenswrapper[5116]: I1212 16:15:05.996665 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000783 5116 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000821 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000835 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000847 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000857 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000877 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000886 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000895 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000905 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000915 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000924 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000935 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000944 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000953 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000964 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000974 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000983 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93"
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.000992 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001002 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001011 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001020 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001029 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001038 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001047 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001057 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001066 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001075 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001086 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001132 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001143 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001154 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001166 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001191 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001201 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001214 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001224 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001236 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001248 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001259 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001270 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001281 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001292 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001305 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001316 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001327 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001339 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001350 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0"
volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001361 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001373 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001385 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001395 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001406 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001418 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" 
seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001429 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001441 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001452 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001464 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001476 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001488 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 
16:15:06.001499 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001510 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001539 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001551 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001563 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001574 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001586 5116 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001597 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001607 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001617 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001628 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001639 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001651 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001663 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001674 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001685 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001696 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001710 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001720 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001730 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001742 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001753 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001766 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001778 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001788 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" 
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001799 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001810 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001822 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001833 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001845 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001857 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001872 5116 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001886 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001900 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001913 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001927 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001940 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001953 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001964 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001974 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001985 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.001997 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002007 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002019 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" 
volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002031 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002042 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002054 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002313 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002324 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002336 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002349 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002359 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002372 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002382 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002394 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002405 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" 
volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002415 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002426 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002436 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002447 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002458 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002470 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" 
seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002481 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002492 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002502 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002527 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002538 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002550 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 12 16:15:06 crc 
kubenswrapper[5116]: I1212 16:15:06.002563 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002574 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002586 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002598 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002608 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002619 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002630 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002642 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002653 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002665 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002676 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002691 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002701 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002735 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002752 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002766 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002778 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002791 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002804 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002815 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002826 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002837 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002848 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002859 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002869 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002880 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002892 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002903 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002914 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002926 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002938 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002952 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002963 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002974 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.002987 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003001 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003016 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003029 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003042 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003056 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003070 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003084 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003098 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003130 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003145 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003160 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003213 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003230 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003245 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003259 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003275 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003289 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003304 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003318 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003333 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003349 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003363 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003377 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003391 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003405 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003419 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003433 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003446 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003460 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003474 5116 reconstruct.go:97] "Volume reconstruction finished"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003483 5116 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.003736 5116 manager.go:324] Recovery completed
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.017565 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.021142 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.021191 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.021205 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.023889 5116 cpu_manager.go:222] "Starting CPU manager" policy="none"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.023905 5116 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.023928 5116 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.029368 5116 policy_none.go:49] "None policy: Start"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.029391 5116 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.029403 5116 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.040896 5116 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.043564 5116 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.043625 5116 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.043671 5116 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.043688 5116 kubelet.go:2451] "Starting kubelet main sync loop"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.043837 5116 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.044444 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.070753 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.077552 5116 manager.go:341] "Starting Device Plugin manager"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.077949 5116 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.077974 5116 server.go:85] "Starting device plugin registration server"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.078565 5116 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.078593 5116 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.078875 5116 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.078973 5116 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.078987 5116 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.084669 5116 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.084783 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.144276 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.144550 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.145613 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.145695 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.145735 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.146927 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.147162 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.147230 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.147896 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.147934 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.147976 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.147987 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.148034 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.148051 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.149402 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.149518 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.149566 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.150218 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.150249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.150341 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.150267 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.150367 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.150382 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.151784 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.151818 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.151836 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.152531 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.152577 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.152592 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.152634 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.152664 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.152675 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.153550 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.153592 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.153616 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.153996 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.154017 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.154026 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.154079 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.154120 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.154135 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.154993 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.155058 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.155775 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.156091 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.156167 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.171801 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="400ms"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.178894 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.179978 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.180035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.180049 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.180083 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.180647 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.181544 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.206023 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.206461 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.206484 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.206515 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.206658 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.206718 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.206861 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.207185 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.207228 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.207260 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.207386 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.207626 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.207712 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.207777 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.208755 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.209468 5116
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.209689 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.209816 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.209840 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.209879 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.209938 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.209979 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210036 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210281 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210331 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210341 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210395 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210651 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210704 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.210732 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.211037 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") 
" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.215964 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.270087 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.276176 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313350 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313445 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313488 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313508 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313518 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313633 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313635 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313657 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313687 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313704 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313736 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313756 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313765 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313795 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313793 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313830 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313837 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313850 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313714 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313895 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313929 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313934 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313962 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313975 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.313991 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.314008 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 
16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.314019 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.314052 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.314085 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.314125 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.314155 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.314242 5116 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.381478 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.383145 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.383212 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.383226 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.383266 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.384041 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.483429 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.507819 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.516460 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: W1212 16:15:06.526262 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-7acb5e95a102df2eee6702551deeb4ccc7751f0876c2e1c9bceb3c7f3ae9356e WatchSource:0}: Error finding container 7acb5e95a102df2eee6702551deeb4ccc7751f0876c2e1c9bceb3c7f3ae9356e: Status 404 returned error can't find the container with id 7acb5e95a102df2eee6702551deeb4ccc7751f0876c2e1c9bceb3c7f3ae9356e Dec 12 16:15:06 crc kubenswrapper[5116]: W1212 16:15:06.527080 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-77fde369d13224b65dbb25f37e2a549320088ee4aa4da329300baa35ab81570e WatchSource:0}: Error finding container 77fde369d13224b65dbb25f37e2a549320088ee4aa4da329300baa35ab81570e: Status 404 returned error can't find the container with id 77fde369d13224b65dbb25f37e2a549320088ee4aa4da329300baa35ab81570e Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.533182 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:15:06 crc kubenswrapper[5116]: W1212 16:15:06.539711 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-cce639e997fec695fc798d96caf72a00d3fee225ce454fcafe7749fe0452dc4a WatchSource:0}: Error finding container cce639e997fec695fc798d96caf72a00d3fee225ce454fcafe7749fe0452dc4a: Status 404 returned error can't find the container with id cce639e997fec695fc798d96caf72a00d3fee225ce454fcafe7749fe0452dc4a Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.570898 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.572636 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="800ms" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.576713 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:06 crc kubenswrapper[5116]: W1212 16:15:06.590355 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-07b582d0c5a2ad496df8af56d58b5d036e1e7c345f1e29e58d742f7980aa4baf WatchSource:0}: Error finding container 07b582d0c5a2ad496df8af56d58b5d036e1e7c345f1e29e58d742f7980aa4baf: Status 404 returned error can't find the container with id 07b582d0c5a2ad496df8af56d58b5d036e1e7c345f1e29e58d742f7980aa4baf Dec 12 16:15:06 crc kubenswrapper[5116]: W1212 16:15:06.594118 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-2988bce884ce748a621080db1270996f050a3fb3297f6008b72203da5d56369b WatchSource:0}: Error finding container 2988bce884ce748a621080db1270996f050a3fb3297f6008b72203da5d56369b: Status 404 returned error can't find the container with id 2988bce884ce748a621080db1270996f050a3fb3297f6008b72203da5d56369b Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.783785 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.784648 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.788224 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.788267 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.788282 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.788310 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:06 crc kubenswrapper[5116]: E1212 16:15:06.788759 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc" Dec 12 16:15:06 crc kubenswrapper[5116]: I1212 16:15:06.960917 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.047456 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2988bce884ce748a621080db1270996f050a3fb3297f6008b72203da5d56369b"} Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.050746 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"07b582d0c5a2ad496df8af56d58b5d036e1e7c345f1e29e58d742f7980aa4baf"}
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.051851 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"cce639e997fec695fc798d96caf72a00d3fee225ce454fcafe7749fe0452dc4a"}
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.053466 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"77fde369d13224b65dbb25f37e2a549320088ee4aa4da329300baa35ab81570e"}
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.054583 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7acb5e95a102df2eee6702551deeb4ccc7751f0876c2e1c9bceb3c7f3ae9356e"}
Dec 12 16:15:07 crc kubenswrapper[5116]: E1212 16:15:07.335327 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 16:15:07 crc kubenswrapper[5116]: E1212 16:15:07.374804 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="1.6s"
Dec 12 16:15:07 crc kubenswrapper[5116]: E1212 16:15:07.396201 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 16:15:07 crc kubenswrapper[5116]: E1212 16:15:07.532958 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.589544 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.591300 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.591344 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.591357 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.591385 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:07 crc kubenswrapper[5116]: E1212 16:15:07.591905 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc"
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.922696 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 16:15:07 crc kubenswrapper[5116]: E1212 16:15:07.924009 5116 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 16:15:07 crc kubenswrapper[5116]: I1212 16:15:07.960155 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.059543 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949" exitCode=0
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.059660 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.059750 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.060393 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.060421 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.060433 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5116]: E1212 16:15:08.060637 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.061453 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd" exitCode=0
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.061529 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.061641 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.062410 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.062437 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.062447 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5116]: E1212 16:15:08.062602 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.063296 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.063932 5116 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b" exitCode=0
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.064005 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.064125 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.064150 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.064160 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.064196 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.066089 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.066136 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.066150 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5116]: E1212 16:15:08.066379 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.067602 5116 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c" exitCode=0
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.067676 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.067757 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.068764 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.068790 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.068800 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5116]: E1212 16:15:08.068968 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.070698 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.070733 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.070743 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.070751 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9"}
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.070881 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.071365 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.071390 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.071400 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5116]: E1212 16:15:08.071552 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:08 crc kubenswrapper[5116]: I1212 16:15:08.959950 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Dec 12 16:15:08 crc kubenswrapper[5116]: E1212 16:15:08.976315 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="3.2s"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.076675 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.076854 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.077849 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.077902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.077918 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:09 crc kubenswrapper[5116]: E1212 16:15:09.078291 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.086770 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.086815 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.086828 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.086948 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.087528 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.087558 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.087568 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:09 crc kubenswrapper[5116]: E1212 16:15:09.087801 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.094223 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.094257 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.094268 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.094277 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.097148 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d" exitCode=0
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.097277 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d"}
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.097394 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.097466 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.098025 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.098070 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.098083 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.098087 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.098128 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.098138 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:09 crc kubenswrapper[5116]: E1212 16:15:09.098354 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:09 crc kubenswrapper[5116]: E1212 16:15:09.098541 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.192901 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.201050 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.201095 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.201129 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:09 crc kubenswrapper[5116]: I1212 16:15:09.201159 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.104917 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e7d8af57c911063390422beec2366789d549d855492d299ff2ff7007d75c8200"}
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.105236 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.106386 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.106425 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.106439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:10 crc kubenswrapper[5116]: E1212 16:15:10.106725 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.109090 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8" exitCode=0
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.109212 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8"}
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.109248 5116 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.109328 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.109359 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.109469 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110600 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110650 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110666 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110606 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110839 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110873 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110605 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110934 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.110962 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:10 crc kubenswrapper[5116]: E1212 16:15:10.111437 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:10 crc kubenswrapper[5116]: E1212 16:15:10.111468 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:10 crc kubenswrapper[5116]: E1212 16:15:10.111991 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:10 crc kubenswrapper[5116]: I1212 16:15:10.550450 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:11 crc kubenswrapper[5116]: I1212 16:15:11.116314 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b"}
Dec 12 16:15:11 crc kubenswrapper[5116]: I1212 16:15:11.116359 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118"}
Dec 12 16:15:11 crc kubenswrapper[5116]: I1212 16:15:11.116369 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460"}
Dec 12 16:15:11 crc kubenswrapper[5116]: I1212 16:15:11.116718 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:11 crc kubenswrapper[5116]: I1212 16:15:11.117235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:11 crc kubenswrapper[5116]: I1212 16:15:11.117268 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:11 crc kubenswrapper[5116]: I1212 16:15:11.117278 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:11 crc kubenswrapper[5116]: E1212 16:15:11.117617 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.049344 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.049612 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.051249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.051315 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.051339 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:12 crc kubenswrapper[5116]: E1212 16:15:12.052016 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.123834 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46"}
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.123884 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215"}
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.124037 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.124087 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.124916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.124951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.124964 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.124925 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.125053 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.125074 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:12 crc kubenswrapper[5116]: E1212 16:15:12.125281 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:12 crc kubenswrapper[5116]: E1212 16:15:12.125580 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:12 crc kubenswrapper[5116]: I1212 16:15:12.161695 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 16:15:13 crc kubenswrapper[5116]: I1212 16:15:13.126961 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:13 crc kubenswrapper[5116]: I1212 16:15:13.128215 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:13 crc kubenswrapper[5116]: I1212 16:15:13.128311 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:13 crc kubenswrapper[5116]: I1212 16:15:13.128332 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:13 crc kubenswrapper[5116]: E1212 16:15:13.129058 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.048778 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.129461 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.130249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.130294 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.130310 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:14 crc kubenswrapper[5116]: E1212 16:15:14.130699 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.158905 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.159457 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.160628 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.160693 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.160708 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:14 crc kubenswrapper[5116]: E1212 16:15:14.161175 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.964845 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.969187 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.969510 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.971160 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.971213 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.971226 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:14 crc kubenswrapper[5116]: E1212 16:15:14.971562 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:14 crc kubenswrapper[5116]: I1212 16:15:14.977349 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.131777 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.131908 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.132310 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.132341 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.132350 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:15 crc kubenswrapper[5116]: E1212 16:15:15.132608 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.132680 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.132726 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.132741 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:15 crc kubenswrapper[5116]: E1212 16:15:15.133069 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.391725 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.392002 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.393285 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.393325 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:15 crc kubenswrapper[5116]: I1212 16:15:15.393339 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:15 crc kubenswrapper[5116]: E1212 16:15:15.393649 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:16 crc kubenswrapper[5116]: E1212 16:15:16.085153 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 16:15:16 crc kubenswrapper[5116]: I1212 16:15:16.335606 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:16 crc kubenswrapper[5116]: I1212 16:15:16.336528 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:16 crc kubenswrapper[5116]: I1212 16:15:16.337916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:16 crc kubenswrapper[5116]: I1212 16:15:16.337970 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:16 crc kubenswrapper[5116]: I1212 16:15:16.337990 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:16 crc kubenswrapper[5116]: E1212 16:15:16.338486 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:16 crc kubenswrapper[5116]: I1212 16:15:16.478478 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:17 crc kubenswrapper[5116]: I1212 16:15:17.138045 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:17 crc kubenswrapper[5116]: I1212 16:15:17.139374 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:17 crc kubenswrapper[5116]: I1212 16:15:17.139415 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:17 crc kubenswrapper[5116]: I1212 16:15:17.139431 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:17 crc kubenswrapper[5116]: E1212 16:15:17.140006 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:17 crc kubenswrapper[5116]: I1212 16:15:17.143413 5116 kubelet.go:2658] "SyncLoop (probe)"
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:18 crc kubenswrapper[5116]: I1212 16:15:18.140034 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:18 crc kubenswrapper[5116]: I1212 16:15:18.140721 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:18 crc kubenswrapper[5116]: I1212 16:15:18.140751 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:18 crc kubenswrapper[5116]: I1212 16:15:18.140761 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:18 crc kubenswrapper[5116]: E1212 16:15:18.141046 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:19 crc kubenswrapper[5116]: E1212 16:15:19.202630 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 12 16:15:19 crc kubenswrapper[5116]: I1212 16:15:19.286356 5116 trace.go:236] Trace[876865082]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:09.284) (total time: 10001ms): Dec 12 16:15:19 crc kubenswrapper[5116]: Trace[876865082]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:15:19.286) Dec 12 16:15:19 crc kubenswrapper[5116]: Trace[876865082]: [10.001452838s] [10.001452838s] END Dec 12 16:15:19 crc kubenswrapper[5116]: E1212 16:15:19.286403 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 16:15:19 crc kubenswrapper[5116]: I1212 16:15:19.479204 5116 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 16:15:19 crc kubenswrapper[5116]: I1212 16:15:19.479309 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 16:15:19 crc kubenswrapper[5116]: I1212 16:15:19.657140 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 16:15:19 crc kubenswrapper[5116]: I1212 16:15:19.657237 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 16:15:19 crc kubenswrapper[5116]: I1212 16:15:19.661834 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 16:15:19 crc kubenswrapper[5116]: I1212 16:15:19.661895 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 16:15:21 crc kubenswrapper[5116]: I1212 16:15:21.828432 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 12 16:15:21 crc kubenswrapper[5116]: I1212 16:15:21.828822 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:21 crc kubenswrapper[5116]: I1212 16:15:21.830083 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:21 crc kubenswrapper[5116]: I1212 16:15:21.830140 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:21 crc kubenswrapper[5116]: I1212 16:15:21.830151 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:21 crc kubenswrapper[5116]: E1212 16:15:21.830577 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:21 crc kubenswrapper[5116]: I1212 16:15:21.850773 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.149783 5116 kubelet_node_status.go:413] "Setting node annotation to 
enable volume controller attach/detach" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.151131 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.151239 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.151332 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:22 crc kubenswrapper[5116]: E1212 16:15:22.151843 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.163010 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 12 16:15:22 crc kubenswrapper[5116]: E1212 16:15:22.177403 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.403559 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.405573 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.405624 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.405636 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:22 crc kubenswrapper[5116]: I1212 16:15:22.405667 5116 
kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:22 crc kubenswrapper[5116]: E1212 16:15:22.413001 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:23 crc kubenswrapper[5116]: I1212 16:15:23.151996 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:23 crc kubenswrapper[5116]: I1212 16:15:23.152773 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:23 crc kubenswrapper[5116]: I1212 16:15:23.152820 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:23 crc kubenswrapper[5116]: I1212 16:15:23.152831 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:23 crc kubenswrapper[5116]: E1212 16:15:23.153285 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:23 crc kubenswrapper[5116]: E1212 16:15:23.864556 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.168158 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.168508 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.169037 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.169160 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.169491 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.169567 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.169584 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.169977 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.173188 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.649431 5116 trace.go:236] Trace[1071341855]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:10.226) (total time: 14422ms): Dec 12 16:15:24 crc kubenswrapper[5116]: Trace[1071341855]: ---"Objects listed" error:services 
is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14422ms (16:15:24.649) Dec 12 16:15:24 crc kubenswrapper[5116]: Trace[1071341855]: [14.422674213s] [14.422674213s] END Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.649489 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.649383 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec8f344f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:05.967325045 +0000 UTC m=+0.431536801,LastTimestamp:2025-12-12 16:15:05.967325045 +0000 UTC m=+0.431536801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.649695 5116 trace.go:236] Trace[621950392]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:09.878) (total time: 14770ms): Dec 12 16:15:24 crc kubenswrapper[5116]: Trace[621950392]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14770ms (16:15:24.649) Dec 12 16:15:24 crc kubenswrapper[5116]: Trace[621950392]: [14.770885664s] [14.770885664s] END Dec 12 16:15:24 crc 
kubenswrapper[5116]: E1212 16:15:24.649714 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.649839 5116 trace.go:236] Trace[2094112424]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:10.603) (total time: 14045ms): Dec 12 16:15:24 crc kubenswrapper[5116]: Trace[2094112424]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14045ms (16:15:24.649) Dec 12 16:15:24 crc kubenswrapper[5116]: Trace[2094112424]: [14.045903062s] [14.045903062s] END Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.649850 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.650718 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC 
m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.650950 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.655568 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.655581 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.663783 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a857a default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC m=+0.485421222,LastTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC m=+0.485421222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.669484 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec95fbab3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.081053498 +0000 UTC m=+0.545265274,LastTimestamp:2025-12-12 16:15:06.081053498 +0000 UTC m=+0.545265274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.675179 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a0d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.145670535 +0000 UTC m=+0.609882311,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.680112 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a5bd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.145726338 +0000 UTC m=+0.609938104,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.689285 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a857a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a857a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC m=+0.485421222,LastTimestamp:2025-12-12 16:15:06.145742969 +0000 UTC m=+0.609954745,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.694493 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a0d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.147914668 +0000 UTC m=+0.612126434,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.700225 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a5bd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC 
m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.14796782 +0000 UTC m=+0.612179596,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.704854 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a857a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a857a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC m=+0.485421222,LastTimestamp:2025-12-12 16:15:06.147985901 +0000 UTC m=+0.612197677,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.709964 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a0d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.148018242 +0000 UTC m=+0.612230008,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.714011 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a5bd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.148042532 +0000 UTC m=+0.612254288,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.718787 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a857a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a857a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC m=+0.485421222,LastTimestamp:2025-12-12 16:15:06.148058533 +0000 UTC m=+0.612270289,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.723820 5116 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.188083ec926a0d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.150239534 +0000 UTC m=+0.614451300,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.728953 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a5bd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.150259364 +0000 UTC m=+0.614471130,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.734838 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a0d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.150357767 +0000 UTC m=+0.614569523,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.742480 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a5bd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.150376558 +0000 UTC m=+0.614588314,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.746733 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a857a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a857a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC m=+0.485421222,LastTimestamp:2025-12-12 16:15:06.150386638 +0000 UTC m=+0.614598384,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.751483 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a857a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a857a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC m=+0.485421222,LastTimestamp:2025-12-12 16:15:06.150366398 +0000 UTC m=+0.614578174,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.754707 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a0d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.152555728 +0000 UTC m=+0.616767484,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.758796 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a5bd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.152586069 +0000 UTC m=+0.616797835,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.762661 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a857a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a857a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021209466 +0000 UTC 
m=+0.485421222,LastTimestamp:2025-12-12 16:15:06.152598209 +0000 UTC m=+0.616809965,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.769927 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a0d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a0d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021178675 +0000 UTC m=+0.485390431,LastTimestamp:2025-12-12 16:15:06.152654111 +0000 UTC m=+0.616865867,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.774546 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083ec926a5bd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083ec926a5bd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.021198806 +0000 UTC m=+0.485410562,LastTimestamp:2025-12-12 16:15:06.152669631 +0000 UTC m=+0.616881387,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.779594 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ecb0f609e6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.53366935 +0000 UTC m=+0.997881106,LastTimestamp:2025-12-12 16:15:06.53366935 +0000 UTC m=+0.997881106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.784182 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ecb0f79a43 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.533771843 +0000 UTC m=+0.997983599,LastTimestamp:2025-12-12 16:15:06.533771843 +0000 UTC m=+0.997983599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.790574 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ecb17b6ebc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.542411452 +0000 UTC m=+1.006623198,LastTimestamp:2025-12-12 16:15:06.542411452 +0000 UTC m=+1.006623198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.795239 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ecb4a594e1 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.595505377 +0000 UTC m=+1.059717163,LastTimestamp:2025-12-12 16:15:06.595505377 +0000 UTC m=+1.059717163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.799315 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ecb4b43c03 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.596465667 +0000 UTC m=+1.060677453,LastTimestamp:2025-12-12 16:15:06.596465667 +0000 UTC m=+1.060677453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.803687 5116 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ecd0f9dac4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.07079034 +0000 UTC m=+1.535002096,LastTimestamp:2025-12-12 16:15:07.07079034 +0000 UTC m=+1.535002096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.807686 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ecd1135245 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.072459333 +0000 UTC m=+1.536671089,LastTimestamp:2025-12-12 16:15:07.072459333 +0000 UTC m=+1.536671089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 
16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.812158 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ecd113550b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.072460043 +0000 UTC m=+1.536671799,LastTimestamp:2025-12-12 16:15:07.072460043 +0000 UTC m=+1.536671799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.816294 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ecd11b1e79 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.072970361 +0000 UTC m=+1.537182117,LastTimestamp:2025-12-12 16:15:07.072970361 +0000 UTC m=+1.537182117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.821740 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ecd1237a80 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.073518208 +0000 UTC m=+1.537729964,LastTimestamp:2025-12-12 16:15:07.073518208 +0000 UTC m=+1.537729964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.826345 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ecd1b791cd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.083223501 +0000 UTC m=+1.547435247,LastTimestamp:2025-12-12 16:15:07.083223501 +0000 UTC 
m=+1.547435247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.830231 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ecd1c75dd7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.084258775 +0000 UTC m=+1.548470521,LastTimestamp:2025-12-12 16:15:07.084258775 +0000 UTC m=+1.548470521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.835084 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ecd1db82d8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.085578968 +0000 UTC m=+1.549790744,LastTimestamp:2025-12-12 16:15:07.085578968 +0000 UTC m=+1.549790744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.840488 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ecd1df062b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.085809195 +0000 UTC m=+1.550020951,LastTimestamp:2025-12-12 16:15:07.085809195 +0000 UTC m=+1.550020951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.847485 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ecd1e15476 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.08596031 +0000 UTC m=+1.550172066,LastTimestamp:2025-12-12 16:15:07.08596031 +0000 UTC m=+1.550172066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.851665 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ecd1e41ce8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.086142696 +0000 UTC m=+1.550354452,LastTimestamp:2025-12-12 16:15:07.086142696 +0000 UTC m=+1.550354452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.856320 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ece225be36 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.358879286 +0000 UTC m=+1.823091042,LastTimestamp:2025-12-12 16:15:07.358879286 +0000 UTC m=+1.823091042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.860647 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ece2bd4f8b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.368812427 +0000 UTC m=+1.833024173,LastTimestamp:2025-12-12 16:15:07.368812427 +0000 UTC m=+1.833024173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.864989 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ece2cd9616 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.369879062 +0000 UTC m=+1.834090818,LastTimestamp:2025-12-12 16:15:07.369879062 +0000 UTC m=+1.834090818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.869272 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ecf607f9ab openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.692472747 +0000 UTC m=+2.156684503,LastTimestamp:2025-12-12 16:15:07.692472747 +0000 UTC m=+2.156684503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.874605 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ecf6c34a6a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.70474865 +0000 UTC m=+2.168960396,LastTimestamp:2025-12-12 16:15:07.70474865 +0000 UTC m=+2.168960396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.879875 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ecf6d10c2b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.705650219 +0000 UTC m=+2.169861975,LastTimestamp:2025-12-12 16:15:07.705650219 +0000 UTC m=+2.169861975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.885487 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ed03d749b5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.924162997 +0000 UTC m=+2.388374753,LastTimestamp:2025-12-12 16:15:07.924162997 +0000 UTC m=+2.388374753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.893544 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ed049b131e 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:07.936994078 +0000 UTC m=+2.401205834,LastTimestamp:2025-12-12 16:15:07.936994078 +0000 UTC m=+2.401205834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.899304 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed0c1d5bac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.062972844 +0000 UTC m=+2.527184600,LastTimestamp:2025-12-12 16:15:08.062972844 +0000 UTC m=+2.527184600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.904034 5116 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ed0c238b50 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.063378256 +0000 UTC m=+2.527590012,LastTimestamp:2025-12-12 16:15:08.063378256 +0000 UTC m=+2.527590012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.912202 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ed0c636ae1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.067564257 +0000 UTC 
m=+2.531776013,LastTimestamp:2025-12-12 16:15:08.067564257 +0000 UTC m=+2.531776013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.917850 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed0cc6ea1b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.074084891 +0000 UTC m=+2.538296647,LastTimestamp:2025-12-12 16:15:08.074084891 +0000 UTC m=+2.538296647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.924391 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed1b4c0ebf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.317691583 +0000 UTC m=+2.781903339,LastTimestamp:2025-12-12 16:15:08.317691583 +0000 UTC m=+2.781903339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.929897 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ed1b7ec101 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.321014017 +0000 UTC m=+2.785225773,LastTimestamp:2025-12-12 16:15:08.321014017 +0000 UTC m=+2.785225773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.936806 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed1b90d384 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.322198404 +0000 UTC m=+2.786410160,LastTimestamp:2025-12-12 16:15:08.322198404 +0000 UTC m=+2.786410160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.952005 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ed1b953207 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.322484743 +0000 UTC m=+2.786696499,LastTimestamp:2025-12-12 16:15:08.322484743 +0000 UTC m=+2.786696499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.957046 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed1c11a634 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.330640948 +0000 UTC m=+2.794852704,LastTimestamp:2025-12-12 16:15:08.330640948 +0000 UTC m=+2.794852704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.963195 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed1c2557ec openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.331931628 +0000 UTC m=+2.796143384,LastTimestamp:2025-12-12 16:15:08.331931628 +0000 UTC m=+2.796143384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: I1212 16:15:24.963907 
5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.970015 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed1c6cc1c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.336611784 +0000 UTC m=+2.800823540,LastTimestamp:2025-12-12 16:15:08.336611784 +0000 UTC m=+2.800823540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.975952 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed1c7c95bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.337649087 +0000 UTC m=+2.801860843,LastTimestamp:2025-12-12 16:15:08.337649087 +0000 UTC m=+2.801860843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.982852 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ed1cbab108 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.341719304 +0000 UTC m=+2.805931060,LastTimestamp:2025-12-12 16:15:08.341719304 +0000 UTC m=+2.805931060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.988289 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ed1ddcfa37 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.360743479 +0000 UTC m=+2.824955235,LastTimestamp:2025-12-12 16:15:08.360743479 +0000 UTC m=+2.824955235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.992728 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed28177f32 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.53235077 +0000 UTC m=+2.996562526,LastTimestamp:2025-12-12 16:15:08.53235077 +0000 UTC m=+2.996562526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5116]: E1212 16:15:24.998581 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed2872e1a3 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.538339747 +0000 UTC m=+3.002551503,LastTimestamp:2025-12-12 16:15:08.538339747 +0000 UTC m=+3.002551503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.005376 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed28cb1a40 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.544121408 +0000 UTC m=+3.008333174,LastTimestamp:2025-12-12 16:15:08.544121408 +0000 UTC m=+3.008333174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.010855 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed28dffcaa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.54549009 +0000 UTC m=+3.009701846,LastTimestamp:2025-12-12 16:15:08.54549009 +0000 UTC m=+3.009701846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.015723 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed297d58c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.555802823 +0000 UTC m=+3.020014579,LastTimestamp:2025-12-12 16:15:08.555802823 +0000 UTC m=+3.020014579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 
16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.021675 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed298b2649 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.556707401 +0000 UTC m=+3.020919157,LastTimestamp:2025-12-12 16:15:08.556707401 +0000 UTC m=+3.020919157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.026021 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed35d4f6f8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 
16:15:08.762871544 +0000 UTC m=+3.227083300,LastTimestamp:2025-12-12 16:15:08.762871544 +0000 UTC m=+3.227083300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.030662 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed36adeeda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.777090778 +0000 UTC m=+3.241302534,LastTimestamp:2025-12-12 16:15:08.777090778 +0000 UTC m=+3.241302534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.034592 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ed36d5c862 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.77970237 +0000 UTC m=+3.243914126,LastTimestamp:2025-12-12 16:15:08.77970237 +0000 UTC m=+3.243914126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.039290 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed3790d352 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.791960402 +0000 UTC m=+3.256172158,LastTimestamp:2025-12-12 16:15:08.791960402 +0000 UTC m=+3.256172158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.043326 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed37bfcdfd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.795039229 +0000 UTC m=+3.259250985,LastTimestamp:2025-12-12 16:15:08.795039229 +0000 UTC m=+3.259250985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.049840 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed435f103f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:08.990025791 +0000 UTC m=+3.454237547,LastTimestamp:2025-12-12 16:15:08.990025791 +0000 UTC m=+3.454237547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 
16:15:25.054138 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed44309b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.003758451 +0000 UTC m=+3.467970207,LastTimestamp:2025-12-12 16:15:09.003758451 +0000 UTC m=+3.467970207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.058596 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed44486715 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.005317909 +0000 UTC m=+3.469529735,LastTimestamp:2025-12-12 
16:15:09.005317909 +0000 UTC m=+3.469529735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.064023 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ed49e8273f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.099673407 +0000 UTC m=+3.563885163,LastTimestamp:2025-12-12 16:15:09.099673407 +0000 UTC m=+3.563885163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.065318 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed50efc755 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.217613653 +0000 UTC m=+3.681825409,LastTimestamp:2025-12-12 16:15:09.217613653 +0000 UTC m=+3.681825409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.069337 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed51ed8024 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.234241572 +0000 UTC m=+3.698453328,LastTimestamp:2025-12-12 16:15:09.234241572 +0000 UTC m=+3.698453328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.074005 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ed5646f3e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.307212772 +0000 UTC m=+3.771424528,LastTimestamp:2025-12-12 16:15:09.307212772 +0000 UTC m=+3.771424528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.081490 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ed57165427 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.320803367 +0000 UTC m=+3.785015143,LastTimestamp:2025-12-12 16:15:09.320803367 +0000 UTC m=+3.785015143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.089219 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ed8659708c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:10.1137307 +0000 UTC m=+4.577942486,LastTimestamp:2025-12-12 16:15:10.1137307 +0000 UTC m=+4.577942486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.094322 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083eda48bfd3e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:10.620359998 +0000 UTC m=+5.084571754,LastTimestamp:2025-12-12 16:15:10.620359998 +0000 UTC m=+5.084571754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.099390 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083eda56e74e3 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:10.635201763 +0000 UTC m=+5.099413519,LastTimestamp:2025-12-12 16:15:10.635201763 +0000 UTC m=+5.099413519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.104416 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083eda57e55a1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:10.636242337 +0000 UTC m=+5.100454083,LastTimestamp:2025-12-12 16:15:10.636242337 +0000 UTC m=+5.100454083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.109472 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.188083edb17471e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:10.836920802 +0000 UTC m=+5.301132558,LastTimestamp:2025-12-12 16:15:10.836920802 +0000 UTC m=+5.301132558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.115543 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edb30e2ab2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:10.863772338 +0000 UTC m=+5.327984094,LastTimestamp:2025-12-12 16:15:10.863772338 +0000 UTC m=+5.327984094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.122270 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edb3232d77 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:10.865149303 +0000 UTC m=+5.329361069,LastTimestamp:2025-12-12 16:15:10.865149303 +0000 UTC m=+5.329361069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.128577 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edc1269481 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.100253313 +0000 UTC m=+5.564465069,LastTimestamp:2025-12-12 16:15:11.100253313 +0000 UTC m=+5.564465069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.132960 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edc1d006ae 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.111358126 +0000 UTC m=+5.575569882,LastTimestamp:2025-12-12 16:15:11.111358126 +0000 UTC m=+5.575569882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.137490 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edc1e14dcb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.112490443 +0000 UTC m=+5.576702199,LastTimestamp:2025-12-12 16:15:11.112490443 +0000 UTC m=+5.576702199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.141636 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edce2bbeae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.318695598 +0000 UTC m=+5.782907354,LastTimestamp:2025-12-12 16:15:11.318695598 +0000 UTC m=+5.782907354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.145645 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edced377fb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.329687547 +0000 UTC m=+5.793899303,LastTimestamp:2025-12-12 16:15:11.329687547 +0000 UTC m=+5.793899303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.153373 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edcee72e0b 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.330979339 +0000 UTC m=+5.795191095,LastTimestamp:2025-12-12 16:15:11.330979339 +0000 UTC m=+5.795191095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.155982 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.156734 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.156766 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.156779 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.157143 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.158227 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edd9b5ae09 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.512284681 +0000 UTC m=+5.976496437,LastTimestamp:2025-12-12 16:15:11.512284681 +0000 UTC m=+5.976496437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.162759 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083edda7bf410 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.525278736 +0000 UTC m=+5.989490492,LastTimestamp:2025-12-12 16:15:11.525278736 +0000 UTC m=+5.989490492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.168702 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 12 16:15:25 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-controller-manager-crc.188083efb49449d8 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 12 16:15:25 crc kubenswrapper[5116]: body: Dec 12 16:15:25 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:19.479273944 +0000 UTC m=+13.943485700,LastTimestamp:2025-12-12 16:15:19.479273944 +0000 UTC m=+13.943485700,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:25 crc kubenswrapper[5116]: > Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.170234 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083efb495a156 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:19.479361878 +0000 UTC 
m=+13.943573634,LastTimestamp:2025-12-12 16:15:19.479361878 +0000 UTC m=+13.943573634,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.173252 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:25 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.188083efbf2f4f16 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 12 16:15:25 crc kubenswrapper[5116]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 16:15:25 crc kubenswrapper[5116]: Dec 12 16:15:25 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:19.657205526 +0000 UTC m=+14.121417292,LastTimestamp:2025-12-12 16:15:19.657205526 +0000 UTC m=+14.121417292,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:25 crc kubenswrapper[5116]: > Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.175308 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188083efbf302cc5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:19.657262277 +0000 UTC m=+14.121474033,LastTimestamp:2025-12-12 16:15:19.657262277 +0000 UTC m=+14.121474033,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.178508 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083efbf2f4f16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:25 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.188083efbf2f4f16 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 12 16:15:25 crc kubenswrapper[5116]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 16:15:25 crc kubenswrapper[5116]: Dec 12 16:15:25 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:19.657205526 +0000 UTC 
m=+14.121417292,LastTimestamp:2025-12-12 16:15:19.66187402 +0000 UTC m=+14.126085776,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:25 crc kubenswrapper[5116]: > Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.180829 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083efbf302cc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083efbf302cc5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:19.657262277 +0000 UTC m=+14.121474033,LastTimestamp:2025-12-12 16:15:19.661916161 +0000 UTC m=+14.126127917,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.185453 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:25 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.188083f0cc1d4f4a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 12 16:15:25 crc kubenswrapper[5116]: body: Dec 12 16:15:25 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:24.169097034 +0000 UTC m=+18.633308830,LastTimestamp:2025-12-12 16:15:24.169097034 +0000 UTC m=+18.633308830,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:25 crc kubenswrapper[5116]: > Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.191060 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f0cc1f384a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:24.169222218 +0000 UTC m=+18.633434024,LastTimestamp:2025-12-12 16:15:24.169222218 +0000 UTC m=+18.633434024,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc 
kubenswrapper[5116]: I1212 16:15:25.271644 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33348->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.271822 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33348->192.168.126.11:17697: read: connection reset by peer" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.276226 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:25 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.188083f10dd66ab5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:33348->192.168.126.11:17697: read: connection reset by peer Dec 12 16:15:25 crc kubenswrapper[5116]: body: Dec 12 16:15:25 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:25.271747253 +0000 UTC m=+19.735959019,LastTimestamp:2025-12-12 16:15:25.271747253 +0000 UTC m=+19.735959019,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:25 crc kubenswrapper[5116]: > Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.280459 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f10dd82f1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33348->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:25.271863066 +0000 UTC m=+19.736074842,LastTimestamp:2025-12-12 16:15:25.271863066 +0000 UTC m=+19.736074842,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.493648 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.493712 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.498301 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:25 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.188083f11b1108cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 12 16:15:25 crc kubenswrapper[5116]: body: Dec 12 16:15:25 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:25.493692621 +0000 UTC m=+19.957904377,LastTimestamp:2025-12-12 16:15:25.493692621 +0000 UTC m=+19.957904377,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:25 crc kubenswrapper[5116]: > Dec 12 16:15:25 crc kubenswrapper[5116]: E1212 16:15:25.506332 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f11b11a748 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:25.493733192 +0000 UTC m=+19.957944948,LastTimestamp:2025-12-12 16:15:25.493733192 +0000 UTC m=+19.957944948,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5116]: I1212 16:15:25.967161 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:26 crc kubenswrapper[5116]: E1212 16:15:26.085472 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:26 crc kubenswrapper[5116]: I1212 16:15:26.485161 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:26 crc kubenswrapper[5116]: I1212 16:15:26.485423 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:26 crc kubenswrapper[5116]: I1212 16:15:26.486447 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:26 crc kubenswrapper[5116]: I1212 16:15:26.486494 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:26 crc kubenswrapper[5116]: I1212 16:15:26.486509 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:26 crc kubenswrapper[5116]: E1212 16:15:26.486878 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:26 crc kubenswrapper[5116]: I1212 16:15:26.490589 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:26 crc kubenswrapper[5116]: I1212 16:15:26.965526 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.167859 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.170523 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e7d8af57c911063390422beec2366789d549d855492d299ff2ff7007d75c8200" exitCode=255 Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.170728 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.170966 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e7d8af57c911063390422beec2366789d549d855492d299ff2ff7007d75c8200"} Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.171090 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:27 crc kubenswrapper[5116]: 
I1212 16:15:27.171621 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.171665 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.171679 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.171764 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.171830 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.171846 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:27 crc kubenswrapper[5116]: E1212 16:15:27.171979 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:27 crc kubenswrapper[5116]: E1212 16:15:27.172294 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.172308 5116 scope.go:117] "RemoveContainer" containerID="e7d8af57c911063390422beec2366789d549d855492d299ff2ff7007d75c8200" Dec 12 16:15:27 crc kubenswrapper[5116]: E1212 16:15:27.184941 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ed44486715\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed44486715 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.005317909 +0000 UTC m=+3.469529735,LastTimestamp:2025-12-12 16:15:27.173351597 +0000 UTC m=+21.637563363,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:27 crc kubenswrapper[5116]: E1212 16:15:27.369078 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ed50efc755\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed50efc755 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.217613653 +0000 UTC m=+3.681825409,LastTimestamp:2025-12-12 16:15:27.363406951 +0000 UTC m=+21.827618717,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:27 crc kubenswrapper[5116]: E1212 16:15:27.379790 5116 event.go:359] "Server rejected event (will 
not retry!)" err="events \"kube-apiserver-crc.188083ed51ed8024\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed51ed8024 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.234241572 +0000 UTC m=+3.698453328,LastTimestamp:2025-12-12 16:15:27.375426273 +0000 UTC m=+21.839638029,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:27 crc kubenswrapper[5116]: I1212 16:15:27.964746 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.176538 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.178594 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb"} Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.178919 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" 
Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.179601 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.179660 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.179676 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:28 crc kubenswrapper[5116]: E1212 16:15:28.180087 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:28 crc kubenswrapper[5116]: E1212 16:15:28.582743 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.813948 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.815155 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.815206 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.815222 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.815256 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:28 crc kubenswrapper[5116]: E1212 16:15:28.823839 5116 kubelet_node_status.go:116] "Unable to register node with 
API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:28 crc kubenswrapper[5116]: I1212 16:15:28.965231 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:29 crc kubenswrapper[5116]: E1212 16:15:29.560011 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 16:15:29 crc kubenswrapper[5116]: E1212 16:15:29.710374 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 16:15:29 crc kubenswrapper[5116]: E1212 16:15:29.748787 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 16:15:29 crc kubenswrapper[5116]: I1212 16:15:29.970488 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:30 crc kubenswrapper[5116]: I1212 16:15:30.967136 5116 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:31 crc kubenswrapper[5116]: E1212 16:15:31.132498 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 16:15:31 crc kubenswrapper[5116]: I1212 16:15:31.967840 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:32 crc kubenswrapper[5116]: I1212 16:15:32.963986 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.195387 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.196366 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.199049 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb" exitCode=255 Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 
16:15:33.199088 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb"} Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.199152 5116 scope.go:117] "RemoveContainer" containerID="e7d8af57c911063390422beec2366789d549d855492d299ff2ff7007d75c8200" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.199382 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.200361 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.200392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.200403 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:33 crc kubenswrapper[5116]: E1212 16:15:33.200736 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.201036 5116 scope.go:117] "RemoveContainer" containerID="848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb" Dec 12 16:15:33 crc kubenswrapper[5116]: E1212 16:15:33.201265 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:33 crc 
kubenswrapper[5116]: E1212 16:15:33.206093 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f2e678d60c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:33.201237516 +0000 UTC m=+27.665449272,LastTimestamp:2025-12-12 16:15:33.201237516 +0000 UTC m=+27.665449272,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:33 crc kubenswrapper[5116]: I1212 16:15:33.964743 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:34 crc kubenswrapper[5116]: I1212 16:15:34.203929 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 16:15:34 crc kubenswrapper[5116]: I1212 16:15:34.969268 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:35 
crc kubenswrapper[5116]: I1212 16:15:35.492807 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.493198 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.494785 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.494855 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.494876 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:35 crc kubenswrapper[5116]: E1212 16:15:35.495393 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.495773 5116 scope.go:117] "RemoveContainer" containerID="848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb" Dec 12 16:15:35 crc kubenswrapper[5116]: E1212 16:15:35.496044 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:35 crc kubenswrapper[5116]: E1212 16:15:35.502645 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f2e678d60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f2e678d60c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:33.201237516 +0000 UTC m=+27.665449272,LastTimestamp:2025-12-12 16:15:35.495998919 +0000 UTC m=+29.960210675,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:35 crc kubenswrapper[5116]: E1212 16:15:35.589587 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.824764 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.825870 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.825910 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.825924 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.825953 5116 
kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:35 crc kubenswrapper[5116]: E1212 16:15:35.837267 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:35 crc kubenswrapper[5116]: I1212 16:15:35.964527 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:36 crc kubenswrapper[5116]: E1212 16:15:36.085710 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:36 crc kubenswrapper[5116]: E1212 16:15:36.616059 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 16:15:36 crc kubenswrapper[5116]: I1212 16:15:36.964049 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:37 crc kubenswrapper[5116]: E1212 16:15:37.407578 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 16:15:37 crc kubenswrapper[5116]: I1212 16:15:37.964448 5116 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:38 crc kubenswrapper[5116]: I1212 16:15:38.179669 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:38 crc kubenswrapper[5116]: I1212 16:15:38.179957 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:38 crc kubenswrapper[5116]: I1212 16:15:38.181153 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:38 crc kubenswrapper[5116]: I1212 16:15:38.181236 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:38 crc kubenswrapper[5116]: I1212 16:15:38.181255 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:38 crc kubenswrapper[5116]: E1212 16:15:38.181902 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:38 crc kubenswrapper[5116]: I1212 16:15:38.182372 5116 scope.go:117] "RemoveContainer" containerID="848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb" Dec 12 16:15:38 crc kubenswrapper[5116]: E1212 16:15:38.182674 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:38 crc kubenswrapper[5116]: 
E1212 16:15:38.187848 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f2e678d60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f2e678d60c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:33.201237516 +0000 UTC m=+27.665449272,LastTimestamp:2025-12-12 16:15:38.182625155 +0000 UTC m=+32.646836931,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:38 crc kubenswrapper[5116]: I1212 16:15:38.963186 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:39 crc kubenswrapper[5116]: I1212 16:15:39.968481 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:40 crc kubenswrapper[5116]: I1212 16:15:40.964924 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:41 crc kubenswrapper[5116]: E1212 16:15:41.087181 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 16:15:41 crc kubenswrapper[5116]: I1212 16:15:41.960560 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:42 crc kubenswrapper[5116]: E1212 16:15:42.597332 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:42 crc kubenswrapper[5116]: I1212 16:15:42.837718 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:42 crc kubenswrapper[5116]: I1212 16:15:42.838701 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:42 crc kubenswrapper[5116]: I1212 16:15:42.838745 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:42 crc kubenswrapper[5116]: I1212 16:15:42.838759 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:42 crc kubenswrapper[5116]: I1212 16:15:42.838790 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:42 crc kubenswrapper[5116]: E1212 16:15:42.847635 5116 kubelet_node_status.go:116] "Unable to register node with API 
server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:42 crc kubenswrapper[5116]: I1212 16:15:42.966950 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:43 crc kubenswrapper[5116]: I1212 16:15:43.964657 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:44 crc kubenswrapper[5116]: I1212 16:15:44.965514 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:45 crc kubenswrapper[5116]: I1212 16:15:45.966425 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:46 crc kubenswrapper[5116]: E1212 16:15:46.087667 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:46 crc kubenswrapper[5116]: I1212 16:15:46.967879 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:47 crc kubenswrapper[5116]: I1212 16:15:47.966338 5116 csi_plugin.go:988] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:48 crc kubenswrapper[5116]: I1212 16:15:48.965490 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:49 crc kubenswrapper[5116]: E1212 16:15:49.604325 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:49 crc kubenswrapper[5116]: I1212 16:15:49.848791 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:49 crc kubenswrapper[5116]: I1212 16:15:49.851178 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:49 crc kubenswrapper[5116]: I1212 16:15:49.851310 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:49 crc kubenswrapper[5116]: I1212 16:15:49.851398 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:49 crc kubenswrapper[5116]: I1212 16:15:49.851507 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:49 crc kubenswrapper[5116]: E1212 16:15:49.862568 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:49 crc kubenswrapper[5116]: I1212 
16:15:49.965597 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:50 crc kubenswrapper[5116]: I1212 16:15:50.967847 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:51 crc kubenswrapper[5116]: I1212 16:15:51.969789 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:52 crc kubenswrapper[5116]: E1212 16:15:52.052768 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 16:15:52 crc kubenswrapper[5116]: E1212 16:15:52.686231 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 16:15:52 crc kubenswrapper[5116]: I1212 16:15:52.965175 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:53 crc kubenswrapper[5116]: I1212 16:15:53.044965 5116 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:53 crc kubenswrapper[5116]: I1212 16:15:53.045742 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:53 crc kubenswrapper[5116]: I1212 16:15:53.045769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:53 crc kubenswrapper[5116]: I1212 16:15:53.045782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:53 crc kubenswrapper[5116]: E1212 16:15:53.046197 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:53 crc kubenswrapper[5116]: I1212 16:15:53.046464 5116 scope.go:117] "RemoveContainer" containerID="848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb" Dec 12 16:15:53 crc kubenswrapper[5116]: E1212 16:15:53.053437 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ed44486715\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed44486715 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.005317909 +0000 UTC 
m=+3.469529735,LastTimestamp:2025-12-12 16:15:53.047579304 +0000 UTC m=+47.511791060,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:53 crc kubenswrapper[5116]: I1212 16:15:53.258449 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 16:15:53 crc kubenswrapper[5116]: E1212 16:15:53.262279 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ed50efc755\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed50efc755 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.217613653 +0000 UTC m=+3.681825409,LastTimestamp:2025-12-12 16:15:53.253458931 +0000 UTC m=+47.717670727,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:53 crc kubenswrapper[5116]: E1212 16:15:53.268382 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ed51ed8024\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ed51ed8024 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:09.234241572 +0000 UTC m=+3.698453328,LastTimestamp:2025-12-12 16:15:53.266352277 +0000 UTC m=+47.730564043,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:53 crc kubenswrapper[5116]: I1212 16:15:53.966521 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.268460 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.268983 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.271285 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78" exitCode=255 Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.271379 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78"} Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.271440 5116 scope.go:117] "RemoveContainer" containerID="848b0d0c341114ae75d1b8a6945257a487fb7f45ec7ba01736b23f41a527dfdb" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.271737 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.272646 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.272693 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.272711 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:54 crc kubenswrapper[5116]: E1212 16:15:54.273173 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.273506 5116 scope.go:117] "RemoveContainer" containerID="9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78" Dec 12 16:15:54 crc kubenswrapper[5116]: E1212 16:15:54.273853 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:54 crc kubenswrapper[5116]: E1212 16:15:54.280924 5116 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.188083f2e678d60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f2e678d60c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:33.201237516 +0000 UTC m=+27.665449272,LastTimestamp:2025-12-12 16:15:54.27380372 +0000 UTC m=+48.738015476,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:54 crc kubenswrapper[5116]: I1212 16:15:54.966523 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.277071 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.402279 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.402615 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:55 crc 
kubenswrapper[5116]: I1212 16:15:55.403603 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.403684 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.403697 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:55 crc kubenswrapper[5116]: E1212 16:15:55.404152 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.493323 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.493528 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.494479 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.494509 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.494521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:55 crc kubenswrapper[5116]: E1212 16:15:55.494828 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.495099 5116 scope.go:117] "RemoveContainer" containerID="9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78" Dec 12 16:15:55 crc kubenswrapper[5116]: 
E1212 16:15:55.495346 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:55 crc kubenswrapper[5116]: E1212 16:15:55.503518 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f2e678d60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f2e678d60c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:33.201237516 +0000 UTC m=+27.665449272,LastTimestamp:2025-12-12 16:15:55.49532116 +0000 UTC m=+49.959532916,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:55 crc kubenswrapper[5116]: I1212 16:15:55.965716 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:56 crc kubenswrapper[5116]: E1212 16:15:56.088925 5116 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:56 crc kubenswrapper[5116]: E1212 16:15:56.203379 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 16:15:56 crc kubenswrapper[5116]: E1212 16:15:56.610194 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:56 crc kubenswrapper[5116]: I1212 16:15:56.863063 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:56 crc kubenswrapper[5116]: I1212 16:15:56.864298 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:56 crc kubenswrapper[5116]: I1212 16:15:56.864347 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:56 crc kubenswrapper[5116]: I1212 16:15:56.864359 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:56 crc kubenswrapper[5116]: I1212 16:15:56.864382 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:56 crc kubenswrapper[5116]: E1212 16:15:56.879747 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:56 
crc kubenswrapper[5116]: I1212 16:15:56.964751 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:57 crc kubenswrapper[5116]: I1212 16:15:57.969175 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:58 crc kubenswrapper[5116]: I1212 16:15:58.179980 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:58 crc kubenswrapper[5116]: I1212 16:15:58.180500 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:58 crc kubenswrapper[5116]: I1212 16:15:58.182581 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:58 crc kubenswrapper[5116]: I1212 16:15:58.182645 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:58 crc kubenswrapper[5116]: I1212 16:15:58.182664 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:58 crc kubenswrapper[5116]: E1212 16:15:58.185802 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:58 crc kubenswrapper[5116]: I1212 16:15:58.186495 5116 scope.go:117] "RemoveContainer" containerID="9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78" Dec 12 16:15:58 crc kubenswrapper[5116]: E1212 16:15:58.187416 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:58 crc kubenswrapper[5116]: E1212 16:15:58.194168 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f2e678d60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f2e678d60c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:33.201237516 +0000 UTC m=+27.665449272,LastTimestamp:2025-12-12 16:15:58.187313479 +0000 UTC m=+52.651525275,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:58 crc kubenswrapper[5116]: I1212 16:15:58.964222 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:59 crc kubenswrapper[5116]: I1212 16:15:59.966934 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:00 crc kubenswrapper[5116]: I1212 16:16:00.968910 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:01 crc kubenswrapper[5116]: I1212 16:16:01.962299 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:02 crc kubenswrapper[5116]: I1212 16:16:02.969466 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:03 crc kubenswrapper[5116]: E1212 16:16:03.239842 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 16:16:03 crc kubenswrapper[5116]: E1212 16:16:03.620361 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:16:03 crc kubenswrapper[5116]: I1212 16:16:03.880587 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:16:03 crc kubenswrapper[5116]: I1212 16:16:03.882890 5116 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:03 crc kubenswrapper[5116]: I1212 16:16:03.882965 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:03 crc kubenswrapper[5116]: I1212 16:16:03.882984 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:03 crc kubenswrapper[5116]: I1212 16:16:03.883026 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:16:03 crc kubenswrapper[5116]: E1212 16:16:03.900094 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:16:03 crc kubenswrapper[5116]: I1212 16:16:03.969543 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:04 crc kubenswrapper[5116]: I1212 16:16:04.969071 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:05 crc kubenswrapper[5116]: I1212 16:16:05.969466 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:06 crc kubenswrapper[5116]: E1212 16:16:06.089367 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:16:06 crc kubenswrapper[5116]: I1212 
16:16:06.963886 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:07 crc kubenswrapper[5116]: I1212 16:16:07.967163 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:08 crc kubenswrapper[5116]: I1212 16:16:08.965036 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:09 crc kubenswrapper[5116]: I1212 16:16:09.966372 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:10 crc kubenswrapper[5116]: E1212 16:16:10.627654 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:16:10 crc kubenswrapper[5116]: I1212 16:16:10.900913 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:16:10 crc kubenswrapper[5116]: I1212 16:16:10.902156 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:10 crc kubenswrapper[5116]: I1212 16:16:10.902203 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 16:16:10 crc kubenswrapper[5116]: I1212 16:16:10.902227 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:10 crc kubenswrapper[5116]: I1212 16:16:10.902257 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:16:10 crc kubenswrapper[5116]: E1212 16:16:10.912674 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:16:10 crc kubenswrapper[5116]: I1212 16:16:10.964141 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:11 crc kubenswrapper[5116]: I1212 16:16:11.967943 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:16:12 crc kubenswrapper[5116]: I1212 16:16:11.999997 5116 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-lz87n" Dec 12 16:16:12 crc kubenswrapper[5116]: I1212 16:16:12.007401 5116 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-lz87n" Dec 12 16:16:12 crc kubenswrapper[5116]: I1212 16:16:12.093433 5116 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 12 16:16:12 crc kubenswrapper[5116]: I1212 16:16:12.851002 5116 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 12 16:16:13 crc 
kubenswrapper[5116]: I1212 16:16:13.009758 5116 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-11 16:11:12 +0000 UTC" deadline="2026-01-06 11:33:58.129381706 +0000 UTC"
Dec 12 16:16:13 crc kubenswrapper[5116]: I1212 16:16:13.009842 5116 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="595h17m45.11954398s"
Dec 12 16:16:14 crc kubenswrapper[5116]: I1212 16:16:14.044785 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:16:14 crc kubenswrapper[5116]: I1212 16:16:14.045973 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:14 crc kubenswrapper[5116]: I1212 16:16:14.046064 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:14 crc kubenswrapper[5116]: I1212 16:16:14.046080 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:14 crc kubenswrapper[5116]: E1212 16:16:14.046861 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:16:14 crc kubenswrapper[5116]: I1212 16:16:14.047281 5116 scope.go:117] "RemoveContainer" containerID="9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78"
Dec 12 16:16:14 crc kubenswrapper[5116]: E1212 16:16:14.047638 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 16:16:16 crc kubenswrapper[5116]: E1212 16:16:16.089863 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.913081 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.914389 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.914552 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.914587 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.914813 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.928664 5116 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.929196 5116 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Dec 12 16:16:17 crc kubenswrapper[5116]: E1212 16:16:17.929239 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.934616 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.934706 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.934737 5116 kubelet_node_status.go:736] "Recording event message for
node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.934774 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.934801 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:17Z","lastTransitionTime":"2025-12-12T16:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:17 crc kubenswrapper[5116]: E1212 16:16:17.960753 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"kubelet has no disk
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.977252 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.977330 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.977350 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.977372 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12
16:16:17 crc kubenswrapper[5116]: I1212 16:16:17.977395 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:17Z","lastTransitionTime":"2025-12-12T16:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:17 crc kubenswrapper[5116]: E1212 16:16:17.994565 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.005005 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.005059 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.005071 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.005090 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.005117 5116 setters.go:618] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.016842 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.026711 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.026757 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.026768 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.026783 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.026794 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.041564 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.041791 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.041828 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.142571 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.242753 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.343261 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5116]: E1212 16:16:18.444036 5116 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.465564 5116 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.470991 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.481886 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.545867 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.545929 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.545947 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.545968 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.545979 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.582402 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.648505 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.648588 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.648605 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.648639 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.648655 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.684320 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.750702 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.750779 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.750798 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.750824 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.750841 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.784851 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.853941 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.853988 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.853998 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.854016 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.854027 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.957046 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.957133 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.957146 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.957163 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.957178 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5116]: I1212 16:16:18.991868 5116 apiserver.go:52] "Watching apiserver" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.000670 5116 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.001576 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xxzkd","openshift-multus/network-metrics-daemon-gbh7p","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-additional-cni-plugins-84wvk","openshift-multus/multus-bphkq","openshift-network-operator/iptables-alerter-5jnd7","openshift-etcd/etcd-crc","openshift-image-registry/node-ca-plb9v","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-node-fg2lh","openshift-machine-config-operator/machine-config-daemon-bb58t","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"] Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.002947 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.003628 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.003806 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.004755 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.005215 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.005326 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.005230 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.005772 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.005787 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.006553 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.006611 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.006948 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.010473 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.010720 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.011546 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.011654 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.011804 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.012133 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.024637 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.029497 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.031768 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.032259 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.032399 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.032612 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.033051 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.034079 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.035582 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.039141 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.039225 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.039176 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.039799 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.042428 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.045201 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.046100 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.049427 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.052546 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.053034 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.053179 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.053179 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.053372 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.055183 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.055327 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.058397 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.059788 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.059843 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.059861 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.059886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.059905 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.059984 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.060542 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.060607 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.060685 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.061284 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.061339 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.061612 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.062409 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.064564 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.064789 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.065001 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.065011 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.066064 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.066267 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.067566 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.068932 5116 scope.go:117] "RemoveContainer" containerID="9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.069391 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.069570 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.073149 5116 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.083122 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.097559 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.112547 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.128513 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.140928 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 
16:16:19.140995 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141023 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141048 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141072 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141091 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141139 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141159 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141181 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141201 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141251 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141346 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141367 
5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141386 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141403 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141423 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141452 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141477 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: 
\"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141507 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141529 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141548 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141567 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141588 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141605 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141626 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141644 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141661 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141679 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.141696 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.141713 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.142537 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.142619 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.142899 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.142953 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). 
InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.143367 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.143377 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.143590 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.143841 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.143827 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144209 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144215 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144309 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144342 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: 
\"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144367 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144391 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144417 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144518 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144551 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144576 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144598 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144622 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144640 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144671 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144737 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144787 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144823 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144834 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144823 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144903 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144931 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144958 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144982 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145005 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145034 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145059 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145084 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145131 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145167 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145191 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145221 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145533 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145589 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145621 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146009 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146076 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146198 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146217 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146266 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146291 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146313 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146371 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146922 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146955 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.144825 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145343 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145403 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145881 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.145981 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146309 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146739 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146775 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146836 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.146993 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.147407 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.147455 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.147563 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.148020 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.148254 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:19.648227731 +0000 UTC m=+74.112439477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.148000 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.148620 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.148675 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.149207 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.149504 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.149519 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.149632 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.149903 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.150148 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.150361 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.150430 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.150717 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.150781 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151175 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151308 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151461 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151478 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151493 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151571 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151813 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151936 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.151953 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152058 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152142 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152179 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152207 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152233 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152258 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152285 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152310 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152334 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152356 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152387 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152411 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152437 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152466 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152494 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152518 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152543 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152573 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152542 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152603 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.152827 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153131 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153134 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153182 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153241 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153468 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153707 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153726 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.153894 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154093 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154131 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154191 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154285 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154443 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154441 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154567 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154637 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.154952 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155321 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155703 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155821 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155856 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155879 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155902 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155920 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155945 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155948 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155964 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.155985 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156004 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156024 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: 
\"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156044 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156069 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156089 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156128 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156146 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156164 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156181 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156197 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156213 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156228 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156246 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156262 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156280 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.156298 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.157427 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.157748 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.157798 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.158484 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.158591 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.158839 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.159131 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.159167 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.159316 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.159346 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.159514 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.159810 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.159872 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.158633 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.160168 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.160241 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.160647 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.163859 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.163909 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.164383 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.164631 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.165405 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.165648 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.165872 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.165927 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.165956 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.165982 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166007 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166033 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166066 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166126 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166151 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166173 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166195 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166213 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166233 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166258 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166278 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166296 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166334 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166354 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166376 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166400 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166421 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166444 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166465 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " 
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166483 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166504 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166527 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166546 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166566 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166588 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: 
\"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166613 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166630 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166647 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166667 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166685 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166702 5116 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166719 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166743 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166762 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166779 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166800 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166844 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166869 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166961 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166972 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.166964 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167027 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167281 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167314 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 
16:16:19.167337 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167359 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167378 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167399 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167431 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167449 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod 
\"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167468 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167487 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167507 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167531 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167553 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167577 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167598 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167616 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167637 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167659 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167676 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167702 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167720 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167740 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167758 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167776 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167797 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167815 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167834 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167853 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167856 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167874 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167934 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167961 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.167984 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168010 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168032 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168056 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168048 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168280 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168077 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168480 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168537 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168604 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168642 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168671 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168795 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168864 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168896 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168943 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168940 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" 
(OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168946 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.169312 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.168200 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.169487 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.169529 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.169545 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.169556 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.169646 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.170003 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.170422 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.170655 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.170974 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171089 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171399 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171517 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171605 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171506 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171765 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171792 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171821 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.171805 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.172273 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.172523 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.172667 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.173094 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.173146 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.173153 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.173225 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174062 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174271 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174511 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174518 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174754 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174769 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174806 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.174904 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175128 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175174 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175251 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175260 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175273 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175570 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175588 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175639 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175743 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.175878 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176069 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176227 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176419 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176529 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176638 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176670 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176720 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176762 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176792 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 
16:16:19.176988 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177022 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177049 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177084 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177134 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177179 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177224 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177252 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177247 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-etc-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177280 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177449 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177486 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-script-lib\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177520 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178000 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e0adf1a1-3140-410d-a33a-79b360ff4362-hosts-file\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178028 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5lvv\" (UniqueName: \"kubernetes.io/projected/e0adf1a1-3140-410d-a33a-79b360ff4362-kube-api-access-k5lvv\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178065 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178094 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178219 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178253 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-system-cni-dir\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178283 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-k8s-cni-cncf-io\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178309 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-cni-multus\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178337 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8fedd19a-ed2a-4e65-a3ad-e104203261fe-rootfs\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178363 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-str5m\" (UniqueName: \"kubernetes.io/projected/3252cf25-4bc0-4262-923c-20bb5a19f1cb-kube-api-access-str5m\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178399 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-netd\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178439 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwxf\" (UniqueName: \"kubernetes.io/projected/789dbc62-9a37-4521-89a5-476e80e7beb6-kube-api-access-tgwxf\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178465 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-cnibin\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178491 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0e71d710-0829-4655-b88f-9318b9776228-multus-daemon-config\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178518 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178542 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-bin\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178563 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e71d710-0829-4655-b88f-9318b9776228-cni-binary-copy\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178586 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-kubelet\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178611 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fedd19a-ed2a-4e65-a3ad-e104203261fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178635 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sld\" (UniqueName: \"kubernetes.io/projected/8fedd19a-ed2a-4e65-a3ad-e104203261fe-kube-api-access-z5sld\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178658 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178690 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178717 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-node-log\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178741 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178770 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178796 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-system-cni-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178821 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-etc-kubernetes\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178852 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178880 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178915 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178943 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178970 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-kubelet\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.179137 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-conf-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.180459 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-log-socket\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183836 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-hostroot\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183883 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqphd\" (UniqueName: \"kubernetes.io/projected/af830c5e-c623-45f9-978d-bab9a3fdbd6c-kube-api-access-gqphd\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.176830 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183920 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183986 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184138 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184171 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184205 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-cni-bin\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184234 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184292 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184743 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-var-lib-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184778 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184799 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wdg\" (UniqueName: \"kubernetes.io/projected/814309ea-c9dc-4630-acd2-43b66b028bd5-kube-api-access-82wdg\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184819 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-netns\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.185191 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af830c5e-c623-45f9-978d-bab9a3fdbd6c-serviceca\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.185212 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-netns\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.185233 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-ovn\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.185254 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-cnibin\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk"
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177632 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.177849 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178021 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.178412 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.179137 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.179342 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.179631 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.180352 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.180406 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.180798 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.180867 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.181385 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.181966 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.185358 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.181984 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.182100 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.182426 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.182488 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.182742 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183035 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183150 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183374 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183561 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.183658 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183664 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183746 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183887 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.183885 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184158 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184421 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.184711 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.185230 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186164 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186476 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-cni-binary-copy\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186506 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186557 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-socket-dir-parent\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186591 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmd6\" (UniqueName: \"kubernetes.io/projected/eb955636-d9f0-41af-b498-6d380bb8ad2f-kube-api-access-wlmd6\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186623 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/e0adf1a1-3140-410d-a33a-79b360ff4362-tmp-dir\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186648 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-systemd-units\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187046 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.186575 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187061 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.187193 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.187268 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:19.686670953 +0000 UTC m=+74.150882709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187330 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-os-release\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187367 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af830c5e-c623-45f9-978d-bab9a3fdbd6c-host\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187420 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.187454 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:19.687438415 +0000 UTC m=+74.151650171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187493 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-slash\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187518 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/789dbc62-9a37-4521-89a5-476e80e7beb6-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187541 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-os-release\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187570 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-multus-certs\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187595 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fedd19a-ed2a-4e65-a3ad-e104203261fe-proxy-tls\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187616 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187640 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-systemd\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187660 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-config\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187709 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-env-overrides\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187732 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-cni-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.187759 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlv5q\" (UniqueName: \"kubernetes.io/projected/0e71d710-0829-4655-b88f-9318b9776228-kube-api-access-rlv5q\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.188952 5116 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189027 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189185 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189229 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189438 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189464 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189488 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.189511 5116 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189535 5116 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189555 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189588 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189608 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189627 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189645 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189664 5116 
reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189687 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189704 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189721 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189738 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189757 5116 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189774 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189791 5116 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189809 5116 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189827 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189846 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189867 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189884 5116 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189903 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.189974 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" 
(OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190136 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190175 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190229 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190252 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190292 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190310 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190336 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190370 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190404 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190541 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190744 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190851 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190927 5116 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.190829 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192576 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192830 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192902 5116 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192918 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192934 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192947 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192961 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.192976 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.192991 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193006 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193024 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193041 5116 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193076 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193091 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193120 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193133 5116 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193145 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193158 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193170 5116 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193302 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193321 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193335 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193348 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.193360 5116 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193374 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193386 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193401 5116 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193414 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193463 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193507 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193528 5116 reconciler_common.go:299] "Volume detached for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193545 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193558 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.193571 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194386 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194448 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194421 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194467 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194482 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194492 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194503 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: 
\"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194517 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194528 5116 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194539 5116 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194549 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194559 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194570 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194583 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194595 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194607 5116 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194622 5116 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194635 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194648 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194660 5116 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194672 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194682 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194691 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194702 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194742 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194755 5116 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194764 5116 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195343 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). 
InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195537 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.194774 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195687 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195703 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195717 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195730 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195744 5116 reconciler_common.go:299] "Volume detached for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195760 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195773 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195787 5116 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195804 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195819 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195812 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195834 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195848 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195858 5116 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195869 5116 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195879 5116 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195890 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195903 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 
16:16:19.195917 5116 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195933 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195948 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195961 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195975 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.195988 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196003 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196015 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: 
\"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196025 5116 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196036 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196046 5116 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196055 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196065 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196075 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196087 5116 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.196096 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196123 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196132 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196142 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196151 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196159 5116 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196169 5116 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196179 5116 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196188 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196197 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196207 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196217 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196227 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196237 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196248 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196257 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196266 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.196275 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.205027 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.206137 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.206169 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.206183 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.206247 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:19.706232129 +0000 UTC m=+74.170443885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.206578 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.208437 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.208745 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.208783 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.208817 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.209385 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.209391 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.209703 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.210320 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.210823 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.211707 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.211829 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.211144 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.211995 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.212678 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.212906 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.212928 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.213048 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.213191 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.213465 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.213789 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.213813 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.213937 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.213958 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.213972 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.214001 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.214062 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:19.714028299 +0000 UTC m=+74.178240235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.215162 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.215168 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.214066 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\
":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69
b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.215813 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.215937 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.219091 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.219282 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.220330 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.220442 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.220563 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.220636 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.221241 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.221643 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.223657 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.224268 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.225636 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.227098 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.229049 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.231227 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.243883 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.252213 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.254505 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.256717 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.266321 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.270328 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.279998 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.280049 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.280065 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.280085 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.280097 5116 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.283386 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.296789 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-log-socket\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.296842 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-hostroot\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.296820 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.296868 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gqphd\" (UniqueName: \"kubernetes.io/projected/af830c5e-c623-45f9-978d-bab9a3fdbd6c-kube-api-access-gqphd\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297047 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297123 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-cni-bin\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297148 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297168 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297191 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-var-lib-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297217 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297240 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-82wdg\" (UniqueName: \"kubernetes.io/projected/814309ea-c9dc-4630-acd2-43b66b028bd5-kube-api-access-82wdg\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297258 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-netns\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297266 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-log-socket\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297284 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af830c5e-c623-45f9-978d-bab9a3fdbd6c-serviceca\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297312 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-hostroot\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297315 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-netns\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297352 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-ovn\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297369 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-cnibin\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297390 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-cni-binary-copy\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297412 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297434 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" 
(UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-socket-dir-parent\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297454 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmd6\" (UniqueName: \"kubernetes.io/projected/eb955636-d9f0-41af-b498-6d380bb8ad2f-kube-api-access-wlmd6\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297484 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e0adf1a1-3140-410d-a33a-79b360ff4362-tmp-dir\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297511 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-systemd-units\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297536 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-os-release\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297558 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af830c5e-c623-45f9-978d-bab9a3fdbd6c-host\") pod \"node-ca-plb9v\" (UID: 
\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297577 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297594 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-slash\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297614 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/789dbc62-9a37-4521-89a5-476e80e7beb6-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297631 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-os-release\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297653 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-multus-certs\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " 
pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297670 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fedd19a-ed2a-4e65-a3ad-e104203261fe-proxy-tls\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297688 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297705 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-systemd\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297723 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-config\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297745 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-env-overrides\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.297762 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-cni-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297780 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rlv5q\" (UniqueName: \"kubernetes.io/projected/0e71d710-0829-4655-b88f-9318b9776228-kube-api-access-rlv5q\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297801 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-etc-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297818 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297837 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-script-lib\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297866 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e0adf1a1-3140-410d-a33a-79b360ff4362-hosts-file\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297885 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5lvv\" (UniqueName: \"kubernetes.io/projected/e0adf1a1-3140-410d-a33a-79b360ff4362-kube-api-access-k5lvv\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297904 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297931 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-system-cni-dir\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297952 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-k8s-cni-cncf-io\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297975 5116 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-cni-multus\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298020 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8fedd19a-ed2a-4e65-a3ad-e104203261fe-rootfs\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298046 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-str5m\" (UniqueName: \"kubernetes.io/projected/3252cf25-4bc0-4262-923c-20bb5a19f1cb-kube-api-access-str5m\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298073 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-netd\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298095 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgwxf\" (UniqueName: \"kubernetes.io/projected/789dbc62-9a37-4521-89a5-476e80e7beb6-kube-api-access-tgwxf\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302796 5116 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-cnibin\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302838 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0e71d710-0829-4655-b88f-9318b9776228-multus-daemon-config\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302863 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302894 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-bin\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302928 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e71d710-0829-4655-b88f-9318b9776228-cni-binary-copy\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302953 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-kubelet\") 
pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302986 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fedd19a-ed2a-4e65-a3ad-e104203261fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.303014 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sld\" (UniqueName: \"kubernetes.io/projected/8fedd19a-ed2a-4e65-a3ad-e104203261fe-kube-api-access-z5sld\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.303036 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.303078 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-node-log\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.303135 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.303230 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-system-cni-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299576 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-netns\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299573 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299650 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-systemd-units\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.300330 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-systemd\") pod 
\"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.300476 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.300521 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-socket-dir-parent\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.300593 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af830c5e-c623-45f9-978d-bab9a3fdbd6c-serviceca\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.302733 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-os-release\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.298348 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.303537 5116 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs podName:eb955636-d9f0-41af-b498-6d380bb8ad2f nodeName:}" failed. No retries permitted until 2025-12-12 16:16:19.803511253 +0000 UTC m=+74.267723009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs") pod "network-metrics-daemon-gbh7p" (UID: "eb955636-d9f0-41af-b498-6d380bb8ad2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.304578 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-cnibin\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298386 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-ovn\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298421 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-cnibin\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.297369 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-netns\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298652 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-var-lib-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.298682 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-cni-bin\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299254 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/814309ea-c9dc-4630-acd2-43b66b028bd5-cni-binary-copy\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299407 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af830c5e-c623-45f9-978d-bab9a3fdbd6c-host\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299477 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.299477 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299540 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.299564 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-slash\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.307145 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/789dbc62-9a37-4521-89a5-476e80e7beb6-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.307725 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc 
kubenswrapper[5116]: I1212 16:16:19.307993 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e0adf1a1-3140-410d-a33a-79b360ff4362-tmp-dir\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.303169 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-system-cni-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.312280 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-os-release\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.312349 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.312400 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-bin\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.312549 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/0e71d710-0829-4655-b88f-9318b9776228-multus-daemon-config\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.312680 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-kubelet\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.312950 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e71d710-0829-4655-b88f-9318b9776228-cni-binary-copy\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.313603 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fedd19a-ed2a-4e65-a3ad-e104203261fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.316313 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fedd19a-ed2a-4e65-a3ad-e104203261fe-proxy-tls\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.316616 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-node-log\") pod \"ovnkube-node-fg2lh\" (UID: 
\"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.316679 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.316925 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqphd\" (UniqueName: \"kubernetes.io/projected/af830c5e-c623-45f9-978d-bab9a3fdbd6c-kube-api-access-gqphd\") pod \"node-ca-plb9v\" (UID: \"af830c5e-c623-45f9-978d-bab9a3fdbd6c\") " pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.317243 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.317367 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-config\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.317557 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-multus-certs\") pod \"multus-bphkq\" (UID: 
\"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.317934 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-cni-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318180 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-etc-openvswitch\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318247 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318285 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-env-overrides\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318448 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e0adf1a1-3140-410d-a33a-79b360ff4362-hosts-file\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 
16:16:19.318538 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-var-lib-cni-multus\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318723 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318837 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-script-lib\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318900 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-host-run-k8s-cni-cncf-io\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318814 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/814309ea-c9dc-4630-acd2-43b66b028bd5-system-cni-dir\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.318965 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8fedd19a-ed2a-4e65-a3ad-e104203261fe-rootfs\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.319023 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-netd\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.319255 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-etc-kubernetes\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.319429 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-kubelet\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.320316 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-conf-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.320590 5116 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.320628 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.319537 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-etc-kubernetes\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321002 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e71d710-0829-4655-b88f-9318b9776228-multus-conf-dir\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.319550 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-kubelet\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321646 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321714 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node 
\"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321729 5116 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321757 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321771 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321783 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321795 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321807 5116 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321822 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.321833 5116 reconciler_common.go:299] "Volume 
detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322276 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322297 5116 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322335 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322349 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322363 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322377 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322418 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 12 
16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322428 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322441 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322454 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322465 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322493 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322503 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322519 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322533 5116 reconciler_common.go:299] "Volume detached 
for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322563 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322603 5116 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322622 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322652 5116 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322662 5116 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322675 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322686 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322695 5116 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322721 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322738 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322766 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322777 5116 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322805 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322817 5116 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 
crc kubenswrapper[5116]: I1212 16:16:19.322827 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322839 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322851 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322877 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322888 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.322898 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323118 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323133 5116 
reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323146 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323189 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323205 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323218 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323228 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323305 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323335 5116 reconciler_common.go:299] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323345 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323354 5116 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323366 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323376 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323386 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323522 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323542 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323552 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323562 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323571 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323621 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323634 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323644 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323660 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323696 5116 reconciler_common.go:299] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323710 5116 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323723 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323742 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323800 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323812 5116 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323824 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323841 5116 reconciler_common.go:299] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323871 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.323883 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.324714 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmd6\" (UniqueName: \"kubernetes.io/projected/eb955636-d9f0-41af-b498-6d380bb8ad2f-kube-api-access-wlmd6\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.326895 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82wdg\" (UniqueName: \"kubernetes.io/projected/814309ea-c9dc-4630-acd2-43b66b028bd5-kube-api-access-82wdg\") pod \"multus-additional-cni-plugins-84wvk\" (UID: \"814309ea-c9dc-4630-acd2-43b66b028bd5\") " pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.327496 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.334020 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.336598 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-str5m\" (UniqueName: \"kubernetes.io/projected/3252cf25-4bc0-4262-923c-20bb5a19f1cb-kube-api-access-str5m\") pod \"ovnkube-control-plane-57b78d8988-fl6jw\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.336906 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sld\" (UniqueName: \"kubernetes.io/projected/8fedd19a-ed2a-4e65-a3ad-e104203261fe-kube-api-access-z5sld\") pod \"machine-config-daemon-bb58t\" (UID: \"8fedd19a-ed2a-4e65-a3ad-e104203261fe\") " pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.338946 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5lvv\" (UniqueName: \"kubernetes.io/projected/e0adf1a1-3140-410d-a33a-79b360ff4362-kube-api-access-k5lvv\") pod \"node-resolver-xxzkd\" (UID: \"e0adf1a1-3140-410d-a33a-79b360ff4362\") " pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.339063 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.339186 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgwxf\" (UniqueName: \"kubernetes.io/projected/789dbc62-9a37-4521-89a5-476e80e7beb6-kube-api-access-tgwxf\") pod \"ovnkube-node-fg2lh\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.339508 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd6025
59027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809a
cacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"
Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.341927 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlv5q\" (UniqueName: \"kubernetes.io/projected/0e71d710-0829-4655-b88f-9318b9776228-kube-api-access-rlv5q\") pod \"multus-bphkq\" (UID: \"0e71d710-0829-4655-b88f-9318b9776228\") " pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.349180 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.352200 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.352275 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:19 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: source /etc/kubernetes/apiserver-url.env Dec 12 16:16:19 crc kubenswrapper[5116]: else Dec 12 16:16:19 crc kubenswrapper[5116]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 12 16:16:19 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 12 16:16:19 crc kubenswrapper[5116]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.352744 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.353615 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.355388 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-84wvk" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.358988 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:19 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:19 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 12 16:16:19 crc kubenswrapper[5116]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 12 16:16:19 crc kubenswrapper[5116]: ho_enable="--enable-hybrid-overlay" Dec 12 16:16:19 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 12 16:16:19 crc kubenswrapper[5116]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 12 16:16:19 crc kubenswrapper[5116]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 12 16:16:19 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:19 crc kubenswrapper[5116]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 12 16:16:19 crc kubenswrapper[5116]: --webhook-host=127.0.0.1 \ Dec 12 16:16:19 crc kubenswrapper[5116]: --webhook-port=9743 \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${ho_enable} \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-interconnect \ Dec 12 16:16:19 crc kubenswrapper[5116]: --disable-approver \ Dec 12 16:16:19 crc kubenswrapper[5116]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 12 16:16:19 crc kubenswrapper[5116]: --wait-for-kubernetes-api=200s \ Dec 12 16:16:19 crc kubenswrapper[5116]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 12 16:16:19 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 12 16:16:19 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.367927 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.368316 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xxzkd" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.371281 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:19 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:19 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 12 16:16:19 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:19 crc kubenswrapper[5116]: --disable-webhook \ Dec 12 16:16:19 crc kubenswrapper[5116]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 12 16:16:19 crc kubenswrapper[5116]: 
--loglevel="${LOGLEVEL}" Dec 12 16:16:19 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.372479 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.372587 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.374248 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.378817 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bphkq" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.379391 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: W1212 16:16:19.381226 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod814309ea_c9dc_4630_acd2_43b66b028bd5.slice/crio-6a6c474b113eeecb576bd2d96772a4b6616d0f2d1d6f0f2c131aea6d02dffe0c WatchSource:0}: Error finding container 6a6c474b113eeecb576bd2d96772a4b6616d0f2d1d6f0f2c131aea6d02dffe0c: Status 404 returned error can't find the container with id 6a6c474b113eeecb576bd2d96772a4b6616d0f2d1d6f0f2c131aea6d02dffe0c Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.385101 5116 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-plb9v" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.386951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.386988 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.387002 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.387024 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.387037 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.390826 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.393701 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="init 
container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82wdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-84wvk_openshift-multus(814309ea-c9dc-4630-acd2-43b66b028bd5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.395082 5116 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-84wvk" podUID="814309ea-c9dc-4630-acd2-43b66b028bd5" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.396018 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:19 crc kubenswrapper[5116]: set -uo pipefail Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 12 16:16:19 crc kubenswrapper[5116]: HOSTS_FILE="/etc/hosts" Dec 12 16:16:19 crc kubenswrapper[5116]: TEMP_FILE="/tmp/hosts.tmp" Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: # Make a temporary file with the old hosts file's attributes. Dec 12 16:16:19 crc kubenswrapper[5116]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 12 16:16:19 crc kubenswrapper[5116]: echo "Failed to preserve hosts file. Exiting." Dec 12 16:16:19 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: while true; do Dec 12 16:16:19 crc kubenswrapper[5116]: declare -A svc_ips Dec 12 16:16:19 crc kubenswrapper[5116]: for svc in "${services[@]}"; do Dec 12 16:16:19 crc kubenswrapper[5116]: # Fetch service IP from cluster dns if present. 
We make several tries Dec 12 16:16:19 crc kubenswrapper[5116]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 12 16:16:19 crc kubenswrapper[5116]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 12 16:16:19 crc kubenswrapper[5116]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 12 16:16:19 crc kubenswrapper[5116]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:19 crc kubenswrapper[5116]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:19 crc kubenswrapper[5116]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:19 crc kubenswrapper[5116]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 12 16:16:19 crc kubenswrapper[5116]: for i in ${!cmds[*]} Dec 12 16:16:19 crc kubenswrapper[5116]: do Dec 12 16:16:19 crc kubenswrapper[5116]: ips=($(eval "${cmds[i]}")) Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: svc_ips["${svc}"]="${ips[@]}" Dec 12 16:16:19 crc kubenswrapper[5116]: break Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: # Update /etc/hosts only if we get valid service IPs Dec 12 16:16:19 crc kubenswrapper[5116]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 12 16:16:19 crc kubenswrapper[5116]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 12 16:16:19 crc kubenswrapper[5116]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 12 16:16:19 crc kubenswrapper[5116]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 12 16:16:19 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:19 crc kubenswrapper[5116]: continue Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: # Append resolver entries for services Dec 12 16:16:19 crc kubenswrapper[5116]: rc=0 Dec 12 16:16:19 crc kubenswrapper[5116]: for svc in "${!svc_ips[@]}"; do Dec 12 16:16:19 crc kubenswrapper[5116]: for ip in ${svc_ips[${svc}]}; do Dec 12 16:16:19 crc kubenswrapper[5116]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ $rc -ne 0 ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:19 crc kubenswrapper[5116]: continue Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 12 16:16:19 crc kubenswrapper[5116]: # Replace /etc/hosts with our modified version if needed Dec 12 16:16:19 crc kubenswrapper[5116]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 12 16:16:19 crc kubenswrapper[5116]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:19 crc kubenswrapper[5116]: unset svc_ips Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5lvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xxzkd_openshift-dns(e0adf1a1-3140-410d-a33a-79b360ff4362): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.398414 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xxzkd" podUID="e0adf1a1-3140-410d-a33a-79b360ff4362" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.402957 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:19 crc kubenswrapper[5116]: W1212 16:16:19.402965 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e71d710_0829_4655_b88f_9318b9776228.slice/crio-d10f0781fc8398f18f63895bf9f4ea70d0b8e866b68dcee6b5655798a455b7e6 WatchSource:0}: Error finding container d10f0781fc8398f18f63895bf9f4ea70d0b8e866b68dcee6b5655798a455b7e6: Status 404 returned error can't find the container with id d10f0781fc8398f18f63895bf9f4ea70d0b8e866b68dcee6b5655798a455b7e6 Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.409717 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 12 16:16:19 crc kubenswrapper[5116]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 12 16:16:19 crc kubenswrapper[5116]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlv5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-bphkq_openshift-multus(0e71d710-0829-4655-b88f-9318b9776228): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.410011 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocat
edResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\
":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cb
cc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b
6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:19 crc kubenswrapper[5116]: W1212 16:16:19.410416 5116 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf830c5e_c623_45f9_978d_bab9a3fdbd6c.slice/crio-1e2a740b7bd0af45cd182aea51d86b6e25c999fb58e1edaadae614e871a84b41 WatchSource:0}: Error finding container 1e2a740b7bd0af45cd182aea51d86b6e25c999fb58e1edaadae614e871a84b41: Status 404 returned error can't find the container with id 1e2a740b7bd0af45cd182aea51d86b6e25c999fb58e1edaadae614e871a84b41 Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.411488 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-bphkq" podUID="0e71d710-0829-4655-b88f-9318b9776228" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.414543 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:16:19 crc kubenswrapper[5116]: W1212 16:16:19.416641 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod789dbc62_9a37_4521_89a5_476e80e7beb6.slice/crio-50a67f3807b20fb39764c234f4968121e7ec8b83d8be1ff90efe3027e07e98c6 WatchSource:0}: Error finding container 50a67f3807b20fb39764c234f4968121e7ec8b83d8be1ff90efe3027e07e98c6: Status 404 returned error can't find the container with id 50a67f3807b20fb39764c234f4968121e7ec8b83d8be1ff90efe3027e07e98c6 Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.416745 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 12 16:16:19 crc kubenswrapper[5116]: while [ true ]; Dec 12 
16:16:19 crc kubenswrapper[5116]: do Dec 12 16:16:19 crc kubenswrapper[5116]: for f in $(ls /tmp/serviceca); do Dec 12 16:16:19 crc kubenswrapper[5116]: echo $f Dec 12 16:16:19 crc kubenswrapper[5116]: ca_file_path="/tmp/serviceca/${f}" Dec 12 16:16:19 crc kubenswrapper[5116]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 12 16:16:19 crc kubenswrapper[5116]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 12 16:16:19 crc kubenswrapper[5116]: if [ -e "${reg_dir_path}" ]; then Dec 12 16:16:19 crc kubenswrapper[5116]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 12 16:16:19 crc kubenswrapper[5116]: else Dec 12 16:16:19 crc kubenswrapper[5116]: mkdir $reg_dir_path Dec 12 16:16:19 crc kubenswrapper[5116]: cp $ca_file_path $reg_dir_path/ca.crt Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: for d in $(ls /etc/docker/certs.d); do Dec 12 16:16:19 crc kubenswrapper[5116]: echo $d Dec 12 16:16:19 crc kubenswrapper[5116]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 12 16:16:19 crc kubenswrapper[5116]: reg_conf_path="/tmp/serviceca/${dp}" Dec 12 16:16:19 crc kubenswrapper[5116]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 12 16:16:19 crc kubenswrapper[5116]: rm -rf /etc/docker/certs.d/$d Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: sleep 60 & wait ${!} Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqphd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-plb9v_openshift-image-registry(af830c5e-c623-45f9-978d-bab9a3fdbd6c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.418219 5116 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-plb9v" podUID="af830c5e-c623-45f9-978d-bab9a3fdbd6c" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.421467 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.422966 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 12 16:16:19 crc kubenswrapper[5116]: apiVersion: v1 Dec 12 16:16:19 crc kubenswrapper[5116]: clusters: Dec 12 16:16:19 crc kubenswrapper[5116]: - cluster: Dec 12 16:16:19 crc kubenswrapper[5116]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 12 16:16:19 crc kubenswrapper[5116]: server: https://api-int.crc.testing:6443 Dec 12 16:16:19 crc kubenswrapper[5116]: name: default-cluster Dec 12 16:16:19 crc kubenswrapper[5116]: contexts: Dec 12 16:16:19 crc kubenswrapper[5116]: - context: Dec 12 16:16:19 crc kubenswrapper[5116]: cluster: default-cluster Dec 12 16:16:19 crc kubenswrapper[5116]: namespace: default Dec 12 16:16:19 crc kubenswrapper[5116]: user: default-auth Dec 12 16:16:19 crc kubenswrapper[5116]: name: default-context Dec 12 16:16:19 crc kubenswrapper[5116]: current-context: default-context Dec 12 16:16:19 crc kubenswrapper[5116]: kind: Config Dec 12 16:16:19 crc kubenswrapper[5116]: preferences: {} Dec 12 16:16:19 crc kubenswrapper[5116]: users: Dec 12 16:16:19 crc kubenswrapper[5116]: - name: default-auth Dec 12 16:16:19 crc 
kubenswrapper[5116]: user: Dec 12 16:16:19 crc kubenswrapper[5116]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 12 16:16:19 crc kubenswrapper[5116]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 12 16:16:19 crc kubenswrapper[5116]: EOF Dec 12 16:16:19 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgwxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-fg2lh_openshift-ovn-kubernetes(789dbc62-9a37-4521-89a5-476e80e7beb6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.424036 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" Dec 12 16:16:19 crc kubenswrapper[5116]: W1212 16:16:19.434980 5116 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fedd19a_ed2a_4e65_a3ad_e104203261fe.slice/crio-866e108768512c3a911d05b16522ad6589c006c21ed2d8bd7ddc5c97fab1e61f WatchSource:0}: Error finding container 866e108768512c3a911d05b16522ad6589c006c21ed2d8bd7ddc5c97fab1e61f: Status 404 returned error can't find the container with id 866e108768512c3a911d05b16522ad6589c006c21ed2d8bd7ddc5c97fab1e61f Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.437277 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bb58t_openshift-machine-config-operator(8fedd19a-ed2a-4e65-a3ad-e104203261fe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.440965 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bb58t_openshift-machine-config-operator(8fedd19a-ed2a-4e65-a3ad-e104203261fe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.442256 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" 
podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" Dec 12 16:16:19 crc kubenswrapper[5116]: W1212 16:16:19.442841 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3252cf25_4bc0_4262_923c_20bb5a19f1cb.slice/crio-0a618221781dd879f5453e177b4a81c2b41d0a2aba7e6c00bf515c3c346b7df3 WatchSource:0}: Error finding container 0a618221781dd879f5453e177b4a81c2b41d0a2aba7e6c00bf515c3c346b7df3: Status 404 returned error can't find the container with id 0a618221781dd879f5453e177b4a81c2b41d0a2aba7e6c00bf515c3c346b7df3 Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.445639 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:19 crc kubenswrapper[5116]: set -euo pipefail Dec 12 16:16:19 crc kubenswrapper[5116]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 12 16:16:19 crc kubenswrapper[5116]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 12 16:16:19 crc kubenswrapper[5116]: # As the secret mount is optional we must wait for the files to be present. Dec 12 16:16:19 crc kubenswrapper[5116]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 12 16:16:19 crc kubenswrapper[5116]: TS=$(date +%s) Dec 12 16:16:19 crc kubenswrapper[5116]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 12 16:16:19 crc kubenswrapper[5116]: HAS_LOGGED_INFO=0 Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: log_missing_certs(){ Dec 12 16:16:19 crc kubenswrapper[5116]: CUR_TS=$(date +%s) Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Dec 12 16:16:19 crc kubenswrapper[5116]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 12 16:16:19 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 12 16:16:19 crc kubenswrapper[5116]: HAS_LOGGED_INFO=1 Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: } Dec 12 16:16:19 crc kubenswrapper[5116]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 12 16:16:19 crc kubenswrapper[5116]: log_missing_certs Dec 12 16:16:19 crc kubenswrapper[5116]: sleep 5 Dec 12 16:16:19 crc kubenswrapper[5116]: done Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 12 16:16:19 crc kubenswrapper[5116]: exec /usr/bin/kube-rbac-proxy \ Dec 12 16:16:19 crc kubenswrapper[5116]: --logtostderr \ Dec 12 16:16:19 crc kubenswrapper[5116]: --secure-listen-address=:9108 \ Dec 12 16:16:19 crc kubenswrapper[5116]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 12 16:16:19 crc kubenswrapper[5116]: --upstream=http://127.0.0.1:29108/ \ Dec 12 16:16:19 crc kubenswrapper[5116]: --tls-private-key-file=${TLS_PK} \ Dec 12 16:16:19 crc kubenswrapper[5116]: --tls-cert-file=${TLS_CERT} Dec 12 16:16:19 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-str5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-fl6jw_openshift-ovn-kubernetes(3252cf25-4bc0-4262-923c-20bb5a19f1cb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.448737 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:19 crc kubenswrapper[5116]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:19 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:19 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 12 
16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v4_transit_switch_subnet_opt= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: dns_name_resolver_enabled_flag= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: # This is needed so that converting clusters from GA to TP Dec 12 16:16:19 crc kubenswrapper[5116]: # will rollout control plane pods as well Dec 12 16:16:19 crc kubenswrapper[5116]: network_segmentation_enabled_flag= Dec 12 16:16:19 crc kubenswrapper[5116]: multi_network_enabled_flag= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 12 16:16:19 crc kubenswrapper[5116]: fi 
Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "true" != "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: route_advertisements_enable_flag= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: # Enable multi-network policy if configured (control-plane always full mode) Dec 12 16:16:19 crc kubenswrapper[5116]: multi_network_policy_enabled_flag= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: # Enable admin network policy if configured (control-plane always full mode) Dec 12 16:16:19 crc kubenswrapper[5116]: admin_network_policy_enabled_flag= Dec 12 16:16:19 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:19 crc kubenswrapper[5116]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: if [ "shared" == "shared" ]; then Dec 12 16:16:19 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode shared" Dec 12 16:16:19 crc kubenswrapper[5116]: elif [ "shared" == "local" ]; then Dec 12 16:16:19 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode local" Dec 12 16:16:19 crc kubenswrapper[5116]: else Dec 12 16:16:19 crc kubenswrapper[5116]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 12 16:16:19 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:19 crc kubenswrapper[5116]: fi Dec 12 16:16:19 crc kubenswrapper[5116]: Dec 12 16:16:19 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 12 16:16:19 crc kubenswrapper[5116]: exec /usr/bin/ovnkube \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-interconnect \ Dec 12 16:16:19 crc kubenswrapper[5116]: --init-cluster-manager "${K8S_NODE}" \ Dec 12 16:16:19 crc kubenswrapper[5116]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 12 16:16:19 crc kubenswrapper[5116]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 12 16:16:19 crc kubenswrapper[5116]: --metrics-bind-address "127.0.0.1:29108" \ Dec 12 16:16:19 crc kubenswrapper[5116]: --metrics-enable-pprof \ Dec 12 16:16:19 crc kubenswrapper[5116]: --metrics-enable-config-duration \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${ovn_v4_join_subnet_opt} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${ovn_v6_join_subnet_opt} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${dns_name_resolver_enabled_flag} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${persistent_ips_enabled_flag} \ Dec 12 16:16:19 crc 
kubenswrapper[5116]: ${multi_network_enabled_flag} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${network_segmentation_enabled_flag} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${gateway_mode_flags} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${route_advertisements_enable_flag} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${preconfigured_udn_addresses_enable_flag} \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-egress-ip=true \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-egress-firewall=true \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-egress-qos=true \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-egress-service=true \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-multicast \ Dec 12 16:16:19 crc kubenswrapper[5116]: --enable-multi-external-gateway=true \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${multi_network_policy_enabled_flag} \ Dec 12 16:16:19 crc kubenswrapper[5116]: ${admin_network_policy_enabled_flag} Dec 12 16:16:19 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-str5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-fl6jw_openshift-ovn-kubernetes(3252cf25-4bc0-4262-923c-20bb5a19f1cb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:19 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.450099 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.489884 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.489937 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.489947 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.489965 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.489975 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.592395 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.592466 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.592481 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.592500 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.592533 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.694892 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.694944 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.694956 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.694973 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.694984 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.727952 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.728067 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.728089 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.728156 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.728174 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728386 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728432 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728470 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:20.728454209 +0000 UTC m=+75.192665965 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728484 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:20.72847843 +0000 UTC m=+75.192690186 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728496 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:20.72849043 +0000 UTC m=+75.192702186 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728520 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728515 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728563 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728535 5116 projected.go:289] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728581 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728586 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728631 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:20.728618854 +0000 UTC m=+75.192830610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.728649 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:20.728639944 +0000 UTC m=+75.192851700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.797875 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.797954 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.797965 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.797980 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.798007 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.828906 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.829077 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: E1212 16:16:19.829185 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs podName:eb955636-d9f0-41af-b498-6d380bb8ad2f nodeName:}" failed. No retries permitted until 2025-12-12 16:16:20.829163145 +0000 UTC m=+75.293374901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs") pod "network-metrics-daemon-gbh7p" (UID: "eb955636-d9f0-41af-b498-6d380bb8ad2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.900609 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.900675 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.900686 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.900705 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:19 crc kubenswrapper[5116]: I1212 16:16:19.900732 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:19Z","lastTransitionTime":"2025-12-12T16:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.003896 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.003975 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.003997 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.004022 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.004052 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.052863 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.054762 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.057306 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.059199 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.062546 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.065516 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.069350 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.072436 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.073810 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.076955 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.081943 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.083277 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.084352 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.086410 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.086983 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.088330 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.089177 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.090833 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.092610 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.093951 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.095438 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.097468 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.098253 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.099303 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" 
path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.100710 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.101789 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.103128 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.103907 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.106806 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.107134 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.107324 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.106881 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.107519 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.107665 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.109334 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.111996 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.115564 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.119170 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.121289 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.124222 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.125527 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.127191 5116 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.127427 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.134082 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.137138 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.139020 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.142339 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.143561 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.146241 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.147679 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.148689 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.151331 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.153427 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.155882 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.157766 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.160234 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.167993 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.169198 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.171313 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.173548 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.174694 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.176579 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.177802 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.209528 5116 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.209585 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.209599 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.209615 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.209627 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.311902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.312000 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.312020 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.312048 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.312072 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.364127 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"866e108768512c3a911d05b16522ad6589c006c21ed2d8bd7ddc5c97fab1e61f"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.365517 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"50a67f3807b20fb39764c234f4968121e7ec8b83d8be1ff90efe3027e07e98c6"} Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.366534 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bb58t_openshift-machine-config-operator(8fedd19a-ed2a-4e65-a3ad-e104203261fe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.367470 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 12 16:16:20 crc kubenswrapper[5116]: apiVersion: v1 Dec 12 16:16:20 crc 
kubenswrapper[5116]: clusters: Dec 12 16:16:20 crc kubenswrapper[5116]: - cluster: Dec 12 16:16:20 crc kubenswrapper[5116]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 12 16:16:20 crc kubenswrapper[5116]: server: https://api-int.crc.testing:6443 Dec 12 16:16:20 crc kubenswrapper[5116]: name: default-cluster Dec 12 16:16:20 crc kubenswrapper[5116]: contexts: Dec 12 16:16:20 crc kubenswrapper[5116]: - context: Dec 12 16:16:20 crc kubenswrapper[5116]: cluster: default-cluster Dec 12 16:16:20 crc kubenswrapper[5116]: namespace: default Dec 12 16:16:20 crc kubenswrapper[5116]: user: default-auth Dec 12 16:16:20 crc kubenswrapper[5116]: name: default-context Dec 12 16:16:20 crc kubenswrapper[5116]: current-context: default-context Dec 12 16:16:20 crc kubenswrapper[5116]: kind: Config Dec 12 16:16:20 crc kubenswrapper[5116]: preferences: {} Dec 12 16:16:20 crc kubenswrapper[5116]: users: Dec 12 16:16:20 crc kubenswrapper[5116]: - name: default-auth Dec 12 16:16:20 crc kubenswrapper[5116]: user: Dec 12 16:16:20 crc kubenswrapper[5116]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 12 16:16:20 crc kubenswrapper[5116]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 12 16:16:20 crc kubenswrapper[5116]: EOF Dec 12 16:16:20 crc kubenswrapper[5116]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgwxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-fg2lh_openshift-ovn-kubernetes(789dbc62-9a37-4521-89a5-476e80e7beb6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.368575 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.368810 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.369752 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-bb58t_openshift-machine-config-operator(8fedd19a-ed2a-4e65-a3ad-e104203261fe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.370911 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.371426 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.372089 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.372980 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" event={"ID":"3252cf25-4bc0-4262-923c-20bb5a19f1cb","Type":"ContainerStarted","Data":"0a618221781dd879f5453e177b4a81c2b41d0a2aba7e6c00bf515c3c346b7df3"} Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.375643 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c 
#!/bin/bash Dec 12 16:16:20 crc kubenswrapper[5116]: set -euo pipefail Dec 12 16:16:20 crc kubenswrapper[5116]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 12 16:16:20 crc kubenswrapper[5116]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 12 16:16:20 crc kubenswrapper[5116]: # As the secret mount is optional we must wait for the files to be present. Dec 12 16:16:20 crc kubenswrapper[5116]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 12 16:16:20 crc kubenswrapper[5116]: TS=$(date +%s) Dec 12 16:16:20 crc kubenswrapper[5116]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 12 16:16:20 crc kubenswrapper[5116]: HAS_LOGGED_INFO=0 Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: log_missing_certs(){ Dec 12 16:16:20 crc kubenswrapper[5116]: CUR_TS=$(date +%s) Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 12 16:16:20 crc kubenswrapper[5116]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 12 16:16:20 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 12 16:16:20 crc kubenswrapper[5116]: HAS_LOGGED_INFO=1 Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: } Dec 12 16:16:20 crc kubenswrapper[5116]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 12 16:16:20 crc kubenswrapper[5116]: log_missing_certs Dec 12 16:16:20 crc kubenswrapper[5116]: sleep 5 Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 12 16:16:20 crc kubenswrapper[5116]: exec /usr/bin/kube-rbac-proxy \ Dec 12 16:16:20 crc kubenswrapper[5116]: --logtostderr \ Dec 12 16:16:20 crc kubenswrapper[5116]: --secure-listen-address=:9108 \ Dec 12 16:16:20 crc kubenswrapper[5116]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 12 16:16:20 crc kubenswrapper[5116]: --upstream=http://127.0.0.1:29108/ \ Dec 12 16:16:20 crc kubenswrapper[5116]: --tls-private-key-file=${TLS_PK} \ Dec 12 16:16:20 crc kubenswrapper[5116]: --tls-cert-file=${TLS_CERT} Dec 12 16:16:20 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-str5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-fl6jw_openshift-ovn-kubernetes(3252cf25-4bc0-4262-923c-20bb5a19f1cb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.377314 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bphkq" event={"ID":"0e71d710-0829-4655-b88f-9318b9776228","Type":"ContainerStarted","Data":"d10f0781fc8398f18f63895bf9f4ea70d0b8e866b68dcee6b5655798a455b7e6"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.377440 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xxzkd" event={"ID":"e0adf1a1-3140-410d-a33a-79b360ff4362","Type":"ContainerStarted","Data":"1c41517379976c21a1a34c6454ee5db0b3a70007cde03549ccfa9a5196b3b86a"} Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.377738 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:20 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:20 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v4_transit_switch_subnet_opt= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: dns_name_resolver_enabled_flag= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: 
dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: # This is needed so that converting clusters from GA to TP Dec 12 16:16:20 crc kubenswrapper[5116]: # will rollout control plane pods as well Dec 12 16:16:20 crc kubenswrapper[5116]: network_segmentation_enabled_flag= Dec 12 16:16:20 crc kubenswrapper[5116]: multi_network_enabled_flag= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "true" != "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: route_advertisements_enable_flag= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 
12 16:16:20 crc kubenswrapper[5116]: # Enable multi-network policy if configured (control-plane always full mode) Dec 12 16:16:20 crc kubenswrapper[5116]: multi_network_policy_enabled_flag= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: # Enable admin network policy if configured (control-plane always full mode) Dec 12 16:16:20 crc kubenswrapper[5116]: admin_network_policy_enabled_flag= Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: if [ "shared" == "shared" ]; then Dec 12 16:16:20 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode shared" Dec 12 16:16:20 crc kubenswrapper[5116]: elif [ "shared" == "local" ]; then Dec 12 16:16:20 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode local" Dec 12 16:16:20 crc kubenswrapper[5116]: else Dec 12 16:16:20 crc kubenswrapper[5116]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 12 16:16:20 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 12 16:16:20 crc kubenswrapper[5116]: exec /usr/bin/ovnkube \ Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-interconnect \ Dec 12 16:16:20 crc kubenswrapper[5116]: --init-cluster-manager "${K8S_NODE}" \ Dec 12 16:16:20 crc kubenswrapper[5116]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 12 16:16:20 crc kubenswrapper[5116]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 12 16:16:20 crc kubenswrapper[5116]: --metrics-bind-address "127.0.0.1:29108" \ Dec 12 16:16:20 crc kubenswrapper[5116]: --metrics-enable-pprof \ Dec 12 16:16:20 crc kubenswrapper[5116]: --metrics-enable-config-duration \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${ovn_v4_join_subnet_opt} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${ovn_v6_join_subnet_opt} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${dns_name_resolver_enabled_flag} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${persistent_ips_enabled_flag} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${multi_network_enabled_flag} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${network_segmentation_enabled_flag} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${gateway_mode_flags} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${route_advertisements_enable_flag} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${preconfigured_udn_addresses_enable_flag} \ Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-egress-ip=true \ Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-egress-firewall=true \ Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-egress-qos=true \ Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-egress-service=true \ 
Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-multicast \ Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-multi-external-gateway=true \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${multi_network_policy_enabled_flag} \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${admin_network_policy_enabled_flag} Dec 12 16:16:20 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-str5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-fl6jw_openshift-ovn-kubernetes(3252cf25-4bc0-4262-923c-20bb5a19f1cb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.378858 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.380260 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 12 16:16:20 crc kubenswrapper[5116]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 12 16:16:20 crc kubenswrapper[5116]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlv5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-bphkq_openshift-multus(0e71d710-0829-4655-b88f-9318b9776228): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.380223 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.380871 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerStarted","Data":"6a6c474b113eeecb576bd2d96772a4b6616d0f2d1d6f0f2c131aea6d02dffe0c"} Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.381425 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-bphkq" podUID="0e71d710-0829-4655-b88f-9318b9776228" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.384231 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82wdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-84wvk_openshift-multus(814309ea-c9dc-4630-acd2-43b66b028bd5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.385649 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-84wvk" podUID="814309ea-c9dc-4630-acd2-43b66b028bd5" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.385675 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"45c11066dcbe8e527bccf71e5bbfe611114e17826502215b0e98b7fa285ef398"} Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.386454 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:20 crc kubenswrapper[5116]: set -uo pipefail Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 12 16:16:20 crc kubenswrapper[5116]: HOSTS_FILE="/etc/hosts" Dec 12 16:16:20 crc kubenswrapper[5116]: TEMP_FILE="/tmp/hosts.tmp" Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: # Make a temporary file with the old hosts file's attributes. Dec 12 16:16:20 crc kubenswrapper[5116]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 12 16:16:20 crc kubenswrapper[5116]: echo "Failed to preserve hosts file. Exiting." 
Dec 12 16:16:20 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: while true; do Dec 12 16:16:20 crc kubenswrapper[5116]: declare -A svc_ips Dec 12 16:16:20 crc kubenswrapper[5116]: for svc in "${services[@]}"; do Dec 12 16:16:20 crc kubenswrapper[5116]: # Fetch service IP from cluster dns if present. We make several tries Dec 12 16:16:20 crc kubenswrapper[5116]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 12 16:16:20 crc kubenswrapper[5116]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 12 16:16:20 crc kubenswrapper[5116]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 12 16:16:20 crc kubenswrapper[5116]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:20 crc kubenswrapper[5116]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:20 crc kubenswrapper[5116]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:20 crc kubenswrapper[5116]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 12 16:16:20 crc kubenswrapper[5116]: for i in ${!cmds[*]} Dec 12 16:16:20 crc kubenswrapper[5116]: do Dec 12 16:16:20 crc kubenswrapper[5116]: ips=($(eval "${cmds[i]}")) Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: svc_ips["${svc}"]="${ips[@]}" Dec 12 16:16:20 crc kubenswrapper[5116]: break Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: # Update /etc/hosts only if we get valid service IPs Dec 12 16:16:20 crc kubenswrapper[5116]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 12 16:16:20 crc kubenswrapper[5116]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 12 16:16:20 crc kubenswrapper[5116]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 12 16:16:20 crc kubenswrapper[5116]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 12 16:16:20 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:20 crc kubenswrapper[5116]: continue Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: # Append resolver entries for services Dec 12 16:16:20 crc kubenswrapper[5116]: rc=0 Dec 12 16:16:20 crc kubenswrapper[5116]: for svc in "${!svc_ips[@]}"; do Dec 12 16:16:20 crc kubenswrapper[5116]: for ip in ${svc_ips[${svc}]}; do Dec 12 16:16:20 crc kubenswrapper[5116]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ $rc -ne 0 ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:20 crc kubenswrapper[5116]: continue Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 12 16:16:20 crc kubenswrapper[5116]: # Replace /etc/hosts with our modified version if needed Dec 12 16:16:20 crc kubenswrapper[5116]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 12 16:16:20 crc kubenswrapper[5116]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:20 crc kubenswrapper[5116]: unset svc_ips Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5lvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xxzkd_openshift-dns(e0adf1a1-3140-410d-a33a-79b360ff4362): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.387367 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-plb9v" event={"ID":"af830c5e-c623-45f9-978d-bab9a3fdbd6c","Type":"ContainerStarted","Data":"1e2a740b7bd0af45cd182aea51d86b6e25c999fb58e1edaadae614e871a84b41"} Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.387651 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.387647 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xxzkd" podUID="e0adf1a1-3140-410d-a33a-79b360ff4362" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.388183 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"1731f082eef615005ac2e237ab21503ed9feecc359e373928924c7aed5fd1f83"} Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.388763 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.389325 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:20 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:20 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 12 16:16:20 crc kubenswrapper[5116]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 12 16:16:20 crc kubenswrapper[5116]: ho_enable="--enable-hybrid-overlay" Dec 12 16:16:20 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 12 16:16:20 crc kubenswrapper[5116]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 12 16:16:20 crc kubenswrapper[5116]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 12 16:16:20 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:20 crc kubenswrapper[5116]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 12 16:16:20 crc kubenswrapper[5116]: --webhook-host=127.0.0.1 \ Dec 12 16:16:20 crc kubenswrapper[5116]: --webhook-port=9743 \ Dec 12 16:16:20 crc kubenswrapper[5116]: ${ho_enable} \ Dec 12 16:16:20 crc kubenswrapper[5116]: --enable-interconnect \ Dec 12 16:16:20 crc kubenswrapper[5116]: --disable-approver \ Dec 12 16:16:20 crc kubenswrapper[5116]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 12 16:16:20 crc kubenswrapper[5116]: --wait-for-kubernetes-api=200s \ Dec 12 16:16:20 crc kubenswrapper[5116]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 12 16:16:20 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 12 16:16:20 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.389342 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"273618f83f51f7816c10b6d2cd7c413bf23d63720bca575d11e004305498d071"} Dec 12 16:16:20 crc 
kubenswrapper[5116]: E1212 16:16:20.389534 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 12 16:16:20 crc kubenswrapper[5116]: while [ true ]; Dec 12 16:16:20 crc kubenswrapper[5116]: do Dec 12 16:16:20 crc kubenswrapper[5116]: for f in $(ls /tmp/serviceca); do Dec 12 16:16:20 crc kubenswrapper[5116]: echo $f Dec 12 16:16:20 crc kubenswrapper[5116]: ca_file_path="/tmp/serviceca/${f}" Dec 12 16:16:20 crc kubenswrapper[5116]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 12 16:16:20 crc kubenswrapper[5116]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 12 16:16:20 crc kubenswrapper[5116]: if [ -e "${reg_dir_path}" ]; then Dec 12 16:16:20 crc kubenswrapper[5116]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 12 16:16:20 crc kubenswrapper[5116]: else Dec 12 16:16:20 crc kubenswrapper[5116]: mkdir $reg_dir_path Dec 12 16:16:20 crc kubenswrapper[5116]: cp $ca_file_path $reg_dir_path/ca.crt Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: for d in $(ls /etc/docker/certs.d); do Dec 12 16:16:20 crc kubenswrapper[5116]: echo $d Dec 12 16:16:20 crc kubenswrapper[5116]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 12 16:16:20 crc kubenswrapper[5116]: reg_conf_path="/tmp/serviceca/${dp}" Dec 12 16:16:20 crc kubenswrapper[5116]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 12 16:16:20 crc kubenswrapper[5116]: rm -rf /etc/docker/certs.d/$d Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: sleep 60 & wait ${!} Dec 12 16:16:20 crc kubenswrapper[5116]: done Dec 12 16:16:20 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqphd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-plb9v_openshift-image-registry(af830c5e-c623-45f9-978d-bab9a3fdbd6c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.390308 5116 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:20 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: source /etc/kubernetes/apiserver-url.env Dec 12 16:16:20 crc kubenswrapper[5116]: else Dec 12 16:16:20 crc kubenswrapper[5116]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 12 16:16:20 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 12 16:16:20 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI
_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a85
81a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.390656 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-plb9v" podUID="af830c5e-c623-45f9-978d-bab9a3fdbd6c" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.391359 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 
16:16:20.391303 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\
":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc 
kubenswrapper[5116]: E1212 16:16:20.391385 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:20 crc kubenswrapper[5116]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:20 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:20 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:20 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:20 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:20 crc kubenswrapper[5116]: fi Dec 12 16:16:20 crc kubenswrapper[5116]: Dec 12 16:16:20 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 12 16:16:20 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:20 crc kubenswrapper[5116]: --disable-webhook \ Dec 12 16:16:20 crc kubenswrapper[5116]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 12 16:16:20 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 12 16:16:20 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:20 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.393429 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.404124 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.413282 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.414829 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.414880 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.414893 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.414913 5116 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.414928 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.422875 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.441880 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\
\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.454932 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.465959 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.480448 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.492579 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.503565 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.514215 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.516611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.516677 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.516688 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.516723 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.516735 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.525257 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.535740 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.546190 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.562179 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.572918 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.590971 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.611431 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.620383 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.620439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.620455 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.620476 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.620488 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.626434 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae
18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c
04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.640857 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.653995 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.664090 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.674393 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.685697 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.694526 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.702626 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.710764 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.720324 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.723510 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.723543 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.723555 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 
crc kubenswrapper[5116]: I1212 16:16:20.723570 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.723580 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.728452 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.736350 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.741138 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.741300 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:20 crc 
kubenswrapper[5116]: I1212 16:16:20.741669 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.741761 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.741790 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.741850 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:22.741811355 +0000 UTC m=+77.206023111 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742060 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.741819 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742094 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742142 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742065 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:22.742014831 +0000 UTC m=+77.206226637 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.742233 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742268 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:22.742237347 +0000 UTC m=+77.206449143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742421 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:22.74237291 +0000 UTC m=+77.206584716 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742426 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742489 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742519 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.742602 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:22.742575495 +0000 UTC m=+77.206787281 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.752848 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.764259 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50
Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.773515 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"
podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.789504 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.801477 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.811730 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.825825 5116 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.825935 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.825954 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.825984 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.826000 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.830002 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.843420 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.843616 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:20 crc kubenswrapper[5116]: E1212 16:16:20.843728 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs podName:eb955636-d9f0-41af-b498-6d380bb8ad2f nodeName:}" failed. No retries permitted until 2025-12-12 16:16:22.843705513 +0000 UTC m=+77.307917269 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs") pod "network-metrics-daemon-gbh7p" (UID: "eb955636-d9f0-41af-b498-6d380bb8ad2f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.928341 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.928388 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.928402 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.928418 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:20 crc kubenswrapper[5116]: I1212 16:16:20.928476 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:20Z","lastTransitionTime":"2025-12-12T16:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.030685 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.031395 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.031431 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.031455 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.031470 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.044059 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:21 crc kubenswrapper[5116]: E1212 16:16:21.044279 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.044298 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:21 crc kubenswrapper[5116]: E1212 16:16:21.044528 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.044641 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p"
Dec 12 16:16:21 crc kubenswrapper[5116]: E1212 16:16:21.044752 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.044798 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:21 crc kubenswrapper[5116]: E1212 16:16:21.044866 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.134361 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.134426 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.134436 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.134452 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.134466 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.236322 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.236386 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.236401 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.236421 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.236435 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.339566 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.339641 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.339655 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.339676 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.339693 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.442083 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.442161 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.442178 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.442201 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.442217 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.544310 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.544365 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.544377 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.544392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.544403 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.646772 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.646823 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.646834 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.646848 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.646860 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.749806 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.749883 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.749894 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.749931 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.749941 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.853234 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.853316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.853331 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.853352 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.853367 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.956518 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.956622 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.956643 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.956678 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:21 crc kubenswrapper[5116]: I1212 16:16:21.956699 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.059193 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.059269 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.059288 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.059316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.059337 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.174387 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.174446 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.174459 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.174479 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.174496 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.277142 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.277200 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.277214 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.277233 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.277244 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.379763 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.379826 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.379839 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.379860 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.379873 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.481904 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.481950 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.481959 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.481976 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.481986 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.584835 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.584920 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.584936 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.584959 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.584976 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.687710 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.687798 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.687818 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.687843 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.687861 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.772543 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.772835 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:26.772781131 +0000 UTC m=+81.236992917 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.773012 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.773081 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.773206 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.773265 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773287 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773439 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773486 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773511 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773548 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773445 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.773405268 +0000 UTC m=+81.237617064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773556 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773679 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.773610873 +0000 UTC m=+81.237822629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773707 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.773697836 +0000 UTC m=+81.237909592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773706 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773744 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.773842 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.773824629 +0000 UTC m=+81.238036425 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.790211 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.790274 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.790288 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.790308 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.790326 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.876990 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.877205 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:22 crc kubenswrapper[5116]: E1212 16:16:22.877312 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs podName:eb955636-d9f0-41af-b498-6d380bb8ad2f nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.877289459 +0000 UTC m=+81.341501335 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs") pod "network-metrics-daemon-gbh7p" (UID: "eb955636-d9f0-41af-b498-6d380bb8ad2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.893084 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.893200 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.893214 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.893239 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.893255 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.996446 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.996538 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.996559 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.996586 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5116]: I1212 16:16:22.996612 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.044591 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.044671 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:23 crc kubenswrapper[5116]: E1212 16:16:23.044836 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:23 crc kubenswrapper[5116]: E1212 16:16:23.045009 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.044591 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:23 crc kubenswrapper[5116]: E1212 16:16:23.045211 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.045289 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:23 crc kubenswrapper[5116]: E1212 16:16:23.045429 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.099619 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.099726 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.099753 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.099790 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.099816 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.202024 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.202079 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.202089 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.202122 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.202135 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.304988 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.305058 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.305075 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.305094 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.305130 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.406958 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.407029 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.407044 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.407063 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.407075 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.509554 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.509640 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.509664 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.509693 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.509718 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.612516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.612588 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.612602 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.612632 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.612647 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.715360 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.715414 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.715425 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.715441 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.715454 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.818695 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.818784 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.818804 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.818833 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.818854 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.921232 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.921291 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.921304 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.921327 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5116]: I1212 16:16:23.921342 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.024355 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.024430 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.024449 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.024479 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.024502 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.127362 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.127447 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.127467 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.127495 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.127520 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.230864 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.230950 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.230962 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.230977 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.230990 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.333308 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.333366 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.333375 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.333388 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.333397 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.435549 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.435692 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.435728 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.435762 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.435788 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.538048 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.538100 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.538126 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.538139 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.538150 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.640277 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.640349 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.640359 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.640377 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.640389 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.743802 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.743887 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.743911 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.743941 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.743962 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.847368 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.847415 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.847424 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.847439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.847470 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.950776 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.950878 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.950897 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.950929 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5116]: I1212 16:16:24.950953 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.044361 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.044477 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.044396 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.044504 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:25 crc kubenswrapper[5116]: E1212 16:16:25.044733 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:25 crc kubenswrapper[5116]: E1212 16:16:25.044896 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:25 crc kubenswrapper[5116]: E1212 16:16:25.045030 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:25 crc kubenswrapper[5116]: E1212 16:16:25.045196 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.053742 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.053805 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.053825 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.053854 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.053932 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.157057 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.157137 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.157151 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.157172 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.157186 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.260627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.260729 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.260776 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.260809 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.260832 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.364866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.364928 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.364946 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.364970 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.364988 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.468356 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.468423 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.468436 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.468454 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.468467 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.571736 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.571822 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.571840 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.571869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.571892 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.674431 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.674524 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.674551 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.674584 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.674607 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.778646 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.778753 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.778781 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.778819 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.778847 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.882088 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.882172 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.882187 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.882207 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.882222 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.984979 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.985063 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.985092 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.985158 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5116]: I1212 16:16:25.985189 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.057827 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.070689 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.078978 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.087859 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.087930 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.087948 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.087975 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.087992 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.093273 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.114090 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.135849 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.153164 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.167502 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.191287 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.191376 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 
16:16:26.191400 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.191430 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.191451 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.194687 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.207912 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667e
da4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\
\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.221129 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.233392 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.244072 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.253180 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.272194 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.294424 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.294494 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.294509 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:26 crc 
kubenswrapper[5116]: I1212 16:16:26.294527 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.294540 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.295937 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae
18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c
04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.312157 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.324817 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.339130 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.396838 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.396889 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.396899 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.396915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.396924 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.499331 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.499380 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.499389 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.499407 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.499418 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.602328 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.602401 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.602417 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.602438 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.602458 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.705324 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.705423 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.705446 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.705477 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.705496 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.807825 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.807873 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.807886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.807902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.807929 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.826786 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.826857 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:34.826838102 +0000 UTC m=+89.291049858 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.826926 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.826946 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.826988 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.827006 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827100 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827129 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827133 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827160 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827166 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827162 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827198 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf 
podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:34.827191151 +0000 UTC m=+89.291402907 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827244 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:34.827225002 +0000 UTC m=+89.291436768 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827174 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827300 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:34.827288323 +0000 UTC m=+89.291500099 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827139 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.827349 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:34.827338655 +0000 UTC m=+89.291550421 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.910886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.910955 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.910966 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.910992 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.911003 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5116]: I1212 16:16:26.928167 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p"
Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.928325 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:26 crc kubenswrapper[5116]: E1212 16:16:26.928411 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs podName:eb955636-d9f0-41af-b498-6d380bb8ad2f nodeName:}" failed. No retries permitted until 2025-12-12 16:16:34.928393539 +0000 UTC m=+89.392605295 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs") pod "network-metrics-daemon-gbh7p" (UID: "eb955636-d9f0-41af-b498-6d380bb8ad2f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.013509 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.013557 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.013570 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.013588 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.013601 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.044883 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.044953 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.044889 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:27 crc kubenswrapper[5116]: E1212 16:16:27.045035 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.045149 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:27 crc kubenswrapper[5116]: E1212 16:16:27.045219 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:16:27 crc kubenswrapper[5116]: E1212 16:16:27.045145 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:27 crc kubenswrapper[5116]: E1212 16:16:27.045282 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.116624 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.116675 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.116685 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.116702 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.116714 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.218585 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.218631 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.218642 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.218657 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.218668 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.321291 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.321398 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.321425 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.321462 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.321489 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.423536 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.423604 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.423614 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.423652 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.423665 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.526101 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.526376 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.526450 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.526517 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.526577 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.629421 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.630492 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.630534 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.630563 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.630582 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.733239 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.733318 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.733332 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.733353 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.733367 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.837095 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.837173 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.837187 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.837208 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.837223 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.940199 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.940315 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.940340 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.940375 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.940405 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5116]: I1212 16:16:27.970355 5116 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.043806 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.043918 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.043944 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.043974 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.044001 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.147493 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.147600 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.147648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.147679 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.147698 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.250365 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.250458 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.250473 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.250497 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.250514 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.283589 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.283648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.283662 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.283681 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.283695 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:28 crc kubenswrapper[5116]: E1212 16:16:28.300202 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.304905 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.304980 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.304990 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.305008 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.305018 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:28 crc kubenswrapper[5116]: E1212 16:16:28.317416 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.321494 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.321520 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.321528 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.321540 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.321549 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: E1212 16:16:28.331660 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.335143 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.335180 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.335190 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.335207 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.335218 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: E1212 16:16:28.345317 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.348835 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.348895 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.348912 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.348931 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.348944 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: E1212 16:16:28.358789 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:28 crc kubenswrapper[5116]: E1212 16:16:28.358906 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.360122 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.360151 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.360164 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.360181 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.360192 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.462906 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.462996 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.463023 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.463063 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.463095 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.565887 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.565947 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.565957 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.565976 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.565987 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.668013 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.668054 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.668063 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.668077 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.668086 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.770514 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.770578 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.770595 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.770618 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.770634 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.873828 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.873924 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.873955 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.873990 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.874016 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.977291 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.977354 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.977372 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.977394 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5116]: I1212 16:16:28.977406 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.043932 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.043932 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:29 crc kubenswrapper[5116]: E1212 16:16:29.044070 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.043948 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:29 crc kubenswrapper[5116]: E1212 16:16:29.044179 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.044203 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:29 crc kubenswrapper[5116]: E1212 16:16:29.044399 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:29 crc kubenswrapper[5116]: E1212 16:16:29.044780 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.079938 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.079992 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.080002 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.080017 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.080046 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.182971 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.183022 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.183033 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.183048 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.183060 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.286372 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.286467 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.286494 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.286528 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.286554 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.389656 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.389729 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.389744 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.389767 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.389785 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.492785 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.492869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.492892 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.492920 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.492938 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.595483 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.595562 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.595592 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.595623 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.595648 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.699341 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.699418 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.699430 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.699454 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.699467 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.803011 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.803077 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.803090 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.803123 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.803136 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.905735 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.905862 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.905891 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.905927 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:29 crc kubenswrapper[5116]: I1212 16:16:29.905952 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:29Z","lastTransitionTime":"2025-12-12T16:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.009301 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.009388 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.009408 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.009435 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.009455 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.112493 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.112548 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.112558 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.112577 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.112594 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.216642 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.216726 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.216749 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.216782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.216808 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.320472 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.320552 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.320572 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.320600 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.320620 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.423389 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.423444 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.423459 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.423480 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.423494 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.527168 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.527256 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.527285 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.527319 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.527362 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.630676 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.630764 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.630786 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.630816 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.630837 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.734232 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.734528 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.734552 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.734579 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.734598 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.837959 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.838018 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.838030 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.838049 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.838063 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.940921 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.940995 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.941019 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.941056 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:30 crc kubenswrapper[5116]: I1212 16:16:30.941080 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:30Z","lastTransitionTime":"2025-12-12T16:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.044229 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.044266 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.044497 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.044569 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.044607 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.044635 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.044794 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.045237 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.045618 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.045713 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.045887 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.045885 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.046056 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.047844 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:31 crc kubenswrapper[5116]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:31 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:31 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:31 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:31 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:31 crc kubenswrapper[5116]: fi Dec 12 16:16:31 crc kubenswrapper[5116]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 12 16:16:31 crc kubenswrapper[5116]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 12 16:16:31 crc kubenswrapper[5116]: ho_enable="--enable-hybrid-overlay" Dec 12 16:16:31 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 12 16:16:31 crc kubenswrapper[5116]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 12 16:16:31 crc kubenswrapper[5116]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 12 16:16:31 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:31 crc kubenswrapper[5116]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 12 16:16:31 crc kubenswrapper[5116]: --webhook-host=127.0.0.1 \ Dec 12 16:16:31 crc kubenswrapper[5116]: --webhook-port=9743 \ Dec 12 16:16:31 crc kubenswrapper[5116]: ${ho_enable} \ Dec 12 16:16:31 crc kubenswrapper[5116]: --enable-interconnect \ Dec 12 16:16:31 crc kubenswrapper[5116]: 
--disable-approver \ Dec 12 16:16:31 crc kubenswrapper[5116]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 12 16:16:31 crc kubenswrapper[5116]: --wait-for-kubernetes-api=200s \ Dec 12 16:16:31 crc kubenswrapper[5116]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 12 16:16:31 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 12 16:16:31 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:31 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.048073 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82wdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath
:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-84wvk_openshift-multus(814309ea-c9dc-4630-acd2-43b66b028bd5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.049278 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-84wvk" podUID="814309ea-c9dc-4630-acd2-43b66b028bd5" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.050584 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:31 crc kubenswrapper[5116]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:31 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:31 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:31 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:31 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:31 crc kubenswrapper[5116]: fi Dec 12 16:16:31 crc kubenswrapper[5116]: Dec 12 16:16:31 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 12 16:16:31 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:31 crc kubenswrapper[5116]: --disable-webhook \ Dec 12 16:16:31 crc kubenswrapper[5116]: 
--csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 12 16:16:31 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 12 16:16:31 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:31 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:31 crc kubenswrapper[5116]: E1212 16:16:31.051883 5116 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.147584 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.147644 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.147657 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.147675 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.147687 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.250518 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.250602 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.250624 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.250663 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.250707 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.353975 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.354052 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.354067 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.354090 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.354133 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.401357 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.417005 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.438173 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.456405 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.456479 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.456490 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.456506 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.456518 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.460067 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.476494 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.489912 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.512220 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.526020 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50
Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.537441 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"
podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.549422 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.559050 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.559306 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.559351 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.559366 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.559386 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.559400 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.568792 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.586453 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\
\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.607274 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.624792 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.642372 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.659922 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.661395 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.661483 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.661508 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.661542 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.661567 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.676247 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.691027 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.700736 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.764816 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.764909 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.764931 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.764957 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.764976 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.867967 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.868028 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.868041 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.868060 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.868073 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.970440 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.970516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.970545 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.970576 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:31 crc kubenswrapper[5116]: I1212 16:16:31.970604 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:31Z","lastTransitionTime":"2025-12-12T16:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: E1212 16:16:32.046063 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:32 crc kubenswrapper[5116]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:32 crc kubenswrapper[5116]: set -euo pipefail Dec 12 16:16:32 crc kubenswrapper[5116]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 12 16:16:32 crc kubenswrapper[5116]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 12 16:16:32 crc kubenswrapper[5116]: # As the secret mount is optional we must wait for the files to be present. 
Dec 12 16:16:32 crc kubenswrapper[5116]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 12 16:16:32 crc kubenswrapper[5116]: TS=$(date +%s) Dec 12 16:16:32 crc kubenswrapper[5116]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 12 16:16:32 crc kubenswrapper[5116]: HAS_LOGGED_INFO=0 Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: log_missing_certs(){ Dec 12 16:16:32 crc kubenswrapper[5116]: CUR_TS=$(date +%s) Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 12 16:16:32 crc kubenswrapper[5116]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 12 16:16:32 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 12 16:16:32 crc kubenswrapper[5116]: HAS_LOGGED_INFO=1 Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: } Dec 12 16:16:32 crc kubenswrapper[5116]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 12 16:16:32 crc kubenswrapper[5116]: log_missing_certs Dec 12 16:16:32 crc kubenswrapper[5116]: sleep 5 Dec 12 16:16:32 crc kubenswrapper[5116]: done Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 12 16:16:32 crc kubenswrapper[5116]: exec /usr/bin/kube-rbac-proxy \ Dec 12 16:16:32 crc kubenswrapper[5116]: --logtostderr \ Dec 12 16:16:32 crc kubenswrapper[5116]: --secure-listen-address=:9108 \ Dec 12 16:16:32 crc kubenswrapper[5116]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 12 16:16:32 crc kubenswrapper[5116]: --upstream=http://127.0.0.1:29108/ \ Dec 12 16:16:32 crc kubenswrapper[5116]: --tls-private-key-file=${TLS_PK} \ Dec 12 16:16:32 crc kubenswrapper[5116]: --tls-cert-file=${TLS_CERT} Dec 12 16:16:32 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-str5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-fl6jw_openshift-ovn-kubernetes(3252cf25-4bc0-4262-923c-20bb5a19f1cb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:32 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:32 crc kubenswrapper[5116]: E1212 16:16:32.046133 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:32 crc kubenswrapper[5116]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:32 crc kubenswrapper[5116]: set -uo pipefail Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 12 16:16:32 crc kubenswrapper[5116]: HOSTS_FILE="/etc/hosts" Dec 12 16:16:32 crc kubenswrapper[5116]: TEMP_FILE="/tmp/hosts.tmp" Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: IFS=', ' read -r -a services 
<<< "${SERVICES}" Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: # Make a temporary file with the old hosts file's attributes. Dec 12 16:16:32 crc kubenswrapper[5116]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 12 16:16:32 crc kubenswrapper[5116]: echo "Failed to preserve hosts file. Exiting." Dec 12 16:16:32 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: while true; do Dec 12 16:16:32 crc kubenswrapper[5116]: declare -A svc_ips Dec 12 16:16:32 crc kubenswrapper[5116]: for svc in "${services[@]}"; do Dec 12 16:16:32 crc kubenswrapper[5116]: # Fetch service IP from cluster dns if present. We make several tries Dec 12 16:16:32 crc kubenswrapper[5116]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 12 16:16:32 crc kubenswrapper[5116]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 12 16:16:32 crc kubenswrapper[5116]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 12 16:16:32 crc kubenswrapper[5116]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:32 crc kubenswrapper[5116]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:32 crc kubenswrapper[5116]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:32 crc kubenswrapper[5116]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 12 16:16:32 crc kubenswrapper[5116]: for i in ${!cmds[*]} Dec 12 16:16:32 crc kubenswrapper[5116]: do Dec 12 16:16:32 crc kubenswrapper[5116]: ips=($(eval "${cmds[i]}")) Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: svc_ips["${svc}"]="${ips[@]}" Dec 12 16:16:32 crc kubenswrapper[5116]: break Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: done Dec 12 16:16:32 crc kubenswrapper[5116]: done Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: # Update /etc/hosts only if we get valid service IPs Dec 12 16:16:32 crc kubenswrapper[5116]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 12 16:16:32 crc kubenswrapper[5116]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 12 16:16:32 crc kubenswrapper[5116]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 12 16:16:32 crc kubenswrapper[5116]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 12 16:16:32 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:32 crc kubenswrapper[5116]: continue Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: # Append resolver entries for services Dec 12 16:16:32 crc kubenswrapper[5116]: rc=0 Dec 12 16:16:32 crc kubenswrapper[5116]: for svc in "${!svc_ips[@]}"; do Dec 12 16:16:32 crc kubenswrapper[5116]: for ip in ${svc_ips[${svc}]}; do Dec 12 16:16:32 crc kubenswrapper[5116]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 12 16:16:32 crc kubenswrapper[5116]: done Dec 12 16:16:32 crc kubenswrapper[5116]: done Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ $rc -ne 0 ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:32 crc kubenswrapper[5116]: continue Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 12 16:16:32 crc kubenswrapper[5116]: # Replace /etc/hosts with our modified version if needed Dec 12 16:16:32 crc kubenswrapper[5116]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 12 16:16:32 crc kubenswrapper[5116]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:32 crc kubenswrapper[5116]: unset svc_ips Dec 12 16:16:32 crc kubenswrapper[5116]: done Dec 12 16:16:32 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5lvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xxzkd_openshift-dns(e0adf1a1-3140-410d-a33a-79b360ff4362): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:32 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:32 crc kubenswrapper[5116]: E1212 16:16:32.046481 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:32 crc kubenswrapper[5116]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 12 16:16:32 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: source /etc/kubernetes/apiserver-url.env Dec 12 16:16:32 crc 
kubenswrapper[5116]: else Dec 12 16:16:32 crc kubenswrapper[5116]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 12 16:16:32 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 12 16:16:32 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,V
alue:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e
0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:32 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:32 crc kubenswrapper[5116]: E1212 16:16:32.047330 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least 
once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xxzkd" podUID="e0adf1a1-3140-410d-a33a-79b360ff4362" Dec 12 16:16:32 crc kubenswrapper[5116]: E1212 16:16:32.047597 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 12 16:16:32 crc kubenswrapper[5116]: E1212 16:16:32.049176 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:32 crc kubenswrapper[5116]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:32 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:32 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt= Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt= Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: ovn_v4_transit_switch_subnet_opt= Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: 
ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt= Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: dns_name_resolver_enabled_flag= Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: # This is needed so that converting clusters from GA to TP Dec 12 16:16:32 crc kubenswrapper[5116]: # will rollout control plane pods as well Dec 12 16:16:32 crc kubenswrapper[5116]: network_segmentation_enabled_flag= Dec 12 16:16:32 crc kubenswrapper[5116]: multi_network_enabled_flag= Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "true" != "true" ]]; then Dec 12 16:16:32 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 12 16:16:32 crc kubenswrapper[5116]: fi Dec 12 16:16:32 crc kubenswrapper[5116]: Dec 12 16:16:32 crc kubenswrapper[5116]: 
route_advertisements_enable_flag=
Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then
Dec 12 16:16:32 crc kubenswrapper[5116]: route_advertisements_enable_flag="--enable-route-advertisements"
Dec 12 16:16:32 crc kubenswrapper[5116]: fi
Dec 12 16:16:32 crc kubenswrapper[5116]:
Dec 12 16:16:32 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag=
Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then
Dec 12 16:16:32 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Dec 12 16:16:32 crc kubenswrapper[5116]: fi
Dec 12 16:16:32 crc kubenswrapper[5116]:
Dec 12 16:16:32 crc kubenswrapper[5116]: # Enable multi-network policy if configured (control-plane always full mode)
Dec 12 16:16:32 crc kubenswrapper[5116]: multi_network_policy_enabled_flag=
Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then
Dec 12 16:16:32 crc kubenswrapper[5116]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Dec 12 16:16:32 crc kubenswrapper[5116]: fi
Dec 12 16:16:32 crc kubenswrapper[5116]:
Dec 12 16:16:32 crc kubenswrapper[5116]: # Enable admin network policy if configured (control-plane always full mode)
Dec 12 16:16:32 crc kubenswrapper[5116]: admin_network_policy_enabled_flag=
Dec 12 16:16:32 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then
Dec 12 16:16:32 crc kubenswrapper[5116]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Dec 12 16:16:32 crc kubenswrapper[5116]: fi
Dec 12 16:16:32 crc kubenswrapper[5116]:
Dec 12 16:16:32 crc kubenswrapper[5116]: if [ "shared" == "shared" ]; then
Dec 12 16:16:32 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode shared"
Dec 12 16:16:32 crc kubenswrapper[5116]: elif [ "shared" == "local" ]; then
Dec 12 16:16:32 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode local"
Dec 12 16:16:32 crc kubenswrapper[5116]: else
Dec 12 16:16:32 crc kubenswrapper[5116]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Dec 12 16:16:32 crc kubenswrapper[5116]: exit 1
Dec 12 16:16:32 crc kubenswrapper[5116]: fi
Dec 12 16:16:32 crc kubenswrapper[5116]:
Dec 12 16:16:32 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Dec 12 16:16:32 crc kubenswrapper[5116]: exec /usr/bin/ovnkube \
Dec 12 16:16:32 crc kubenswrapper[5116]: --enable-interconnect \
Dec 12 16:16:32 crc kubenswrapper[5116]: --init-cluster-manager "${K8S_NODE}" \
Dec 12 16:16:32 crc kubenswrapper[5116]: --config-file=/run/ovnkube-config/ovnkube.conf \
Dec 12 16:16:32 crc kubenswrapper[5116]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Dec 12 16:16:32 crc kubenswrapper[5116]: --metrics-bind-address "127.0.0.1:29108" \
Dec 12 16:16:32 crc kubenswrapper[5116]: --metrics-enable-pprof \
Dec 12 16:16:32 crc kubenswrapper[5116]: --metrics-enable-config-duration \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${ovn_v4_join_subnet_opt} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${ovn_v6_join_subnet_opt} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${ovn_v4_transit_switch_subnet_opt} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${ovn_v6_transit_switch_subnet_opt} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${dns_name_resolver_enabled_flag} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${persistent_ips_enabled_flag} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${multi_network_enabled_flag} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${network_segmentation_enabled_flag} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${gateway_mode_flags} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${route_advertisements_enable_flag} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${preconfigured_udn_addresses_enable_flag} \
Dec 12 16:16:32 crc kubenswrapper[5116]: --enable-egress-ip=true \
Dec 12 16:16:32 crc kubenswrapper[5116]: --enable-egress-firewall=true \
Dec 12 16:16:32 crc kubenswrapper[5116]: --enable-egress-qos=true \
Dec 12 16:16:32 crc kubenswrapper[5116]: --enable-egress-service=true \
Dec 12 16:16:32 crc kubenswrapper[5116]: --enable-multicast \
Dec 12 16:16:32 crc kubenswrapper[5116]: --enable-multi-external-gateway=true \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${multi_network_policy_enabled_flag} \
Dec 12 16:16:32 crc kubenswrapper[5116]: ${admin_network_policy_enabled_flag}
Dec 12 16:16:32 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-str5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,}
start failed in pod ovnkube-control-plane-57b78d8988-fl6jw_openshift-ovn-kubernetes(3252cf25-4bc0-4262-923c-20bb5a19f1cb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 12 16:16:32 crc kubenswrapper[5116]: > logger="UnhandledError"
Dec 12 16:16:32 crc kubenswrapper[5116]: E1212 16:16:32.050383 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.072568 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.072643 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.072660 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.072684 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.072700 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.175852 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.175954 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.176026 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.176062 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.176085 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.278015 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.278079 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.278098 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.278148 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.278162 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.381169 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.381284 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.381305 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.381333 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.381353 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.484026 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.484068 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.484078 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.484091 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.484100 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.587214 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.587318 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.587339 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.587367 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.587387 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.689866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.689923 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.690130 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.690158 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.690173 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.793186 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.793266 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.793292 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.793325 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.793345 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.896142 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.896203 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.896215 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.896235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.896248 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.999040 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.999121 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.999135 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.999154 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:32 crc kubenswrapper[5116]: I1212 16:16:32.999166 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:32Z","lastTransitionTime":"2025-12-12T16:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.044295 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p"
Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.044350 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:33 crc kubenswrapper[5116]: E1212 16:16:33.044482 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f"
Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.044499 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:33 crc kubenswrapper[5116]: E1212 16:16:33.044617 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:33 crc kubenswrapper[5116]: E1212 16:16:33.044727 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.044759 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:33 crc kubenswrapper[5116]: E1212 16:16:33.044819 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.101546 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.101600 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.101615 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.101634 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.101646 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.203561 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.203617 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.203628 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.203658 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.203669 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.306728 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.306788 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.306801 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.306820 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.306837 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.409825 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.409906 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.409928 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.409954 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.409973 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.513435 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.513511 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.513533 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.513559 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.513581 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.616517 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.616603 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.616630 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.616705 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.616735 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.718756 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.718815 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.718827 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.718850 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.718865 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.821766 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.821848 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.821863 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.821882 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.821901 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.924291 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.924349 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.924362 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.924381 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:33 crc kubenswrapper[5116]: I1212 16:16:33.924393 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:33Z","lastTransitionTime":"2025-12-12T16:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.027019 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.027198 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.027224 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.027255 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.027276 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.048409 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 12 16:16:34 crc kubenswrapper[5116]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT=""
Dec 12 16:16:34 crc kubenswrapper[5116]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT
Dec 12 16:16:34 crc kubenswrapper[5116]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlv5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-bphkq_openshift-multus(0e71d710-0829-4655-b88f-9318b9776228): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:34 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.048834 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bb58t_openshift-machine-config-operator(8fedd19a-ed2a-4e65-a3ad-e104203261fe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.049635 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-bphkq" podUID="0e71d710-0829-4655-b88f-9318b9776228" Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.052360 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bb58t_openshift-machine-config-operator(8fedd19a-ed2a-4e65-a3ad-e104203261fe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.053652 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.130292 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.130346 5116 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.130372 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.130393 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.130406 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.233392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.233492 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.233518 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.233547 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.233567 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.335995 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.336061 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.336079 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.336151 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.336194 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.438387 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.438474 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.438495 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.438522 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.438541 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.540799 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.540862 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.540878 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.540901 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.540919 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.642769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.642843 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.642863 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.642886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.642903 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.745770 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.745830 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.745849 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.745872 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.745894 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.828810 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.829184 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:50.829082168 +0000 UTC m=+105.293293954 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.829506 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.829562 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.829664 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.829780 5116 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.829820 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.829829 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.829850 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.829902 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.829985 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.829958062 +0000 UTC m=+105.294169898 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.829858 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.830073 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.830054275 +0000 UTC m=+105.294266151 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.830097 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.830145 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.830167 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.830244 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.830216489 +0000 UTC m=+105.294428285 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.830297 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.830284401 +0000 UTC m=+105.294496187 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.848187 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.848269 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.848288 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.848315 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.848334 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.931211 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.931542 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: E1212 16:16:34.931725 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs podName:eb955636-d9f0-41af-b498-6d380bb8ad2f nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.931687466 +0000 UTC m=+105.395899432 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs") pod "network-metrics-daemon-gbh7p" (UID: "eb955636-d9f0-41af-b498-6d380bb8ad2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.951551 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.951637 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.951660 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.951687 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:34 crc kubenswrapper[5116]: I1212 16:16:34.951716 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:34Z","lastTransitionTime":"2025-12-12T16:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.045009 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.045054 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.045382 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.045762 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.045948 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.046245 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.046293 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.046469 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.049131 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:35 crc kubenswrapper[5116]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 12 16:16:35 crc kubenswrapper[5116]: while [ true ]; Dec 12 16:16:35 crc kubenswrapper[5116]: do Dec 12 16:16:35 crc kubenswrapper[5116]: for f in $(ls /tmp/serviceca); do Dec 12 16:16:35 crc kubenswrapper[5116]: echo $f Dec 12 16:16:35 crc kubenswrapper[5116]: ca_file_path="/tmp/serviceca/${f}" Dec 12 16:16:35 crc kubenswrapper[5116]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 12 16:16:35 crc kubenswrapper[5116]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 12 16:16:35 crc kubenswrapper[5116]: if [ -e "${reg_dir_path}" ]; then Dec 12 16:16:35 crc kubenswrapper[5116]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 12 16:16:35 crc kubenswrapper[5116]: else Dec 12 16:16:35 crc kubenswrapper[5116]: mkdir $reg_dir_path Dec 12 16:16:35 crc kubenswrapper[5116]: cp $ca_file_path $reg_dir_path/ca.crt Dec 12 16:16:35 crc kubenswrapper[5116]: fi Dec 12 16:16:35 crc kubenswrapper[5116]: done Dec 12 16:16:35 crc kubenswrapper[5116]: for d in $(ls /etc/docker/certs.d); do Dec 12 16:16:35 crc kubenswrapper[5116]: echo $d 
Dec 12 16:16:35 crc kubenswrapper[5116]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 12 16:16:35 crc kubenswrapper[5116]: reg_conf_path="/tmp/serviceca/${dp}" Dec 12 16:16:35 crc kubenswrapper[5116]: if [ ! -e "${reg_conf_path}" ]; then Dec 12 16:16:35 crc kubenswrapper[5116]: rm -rf /etc/docker/certs.d/$d Dec 12 16:16:35 crc kubenswrapper[5116]: fi Dec 12 16:16:35 crc kubenswrapper[5116]: done Dec 12 16:16:35 crc kubenswrapper[5116]: sleep 60 & wait ${!} Dec 12 16:16:35 crc kubenswrapper[5116]: done Dec 12 16:16:35 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqphd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-plb9v_openshift-image-registry(af830c5e-c623-45f9-978d-bab9a3fdbd6c): CreateContainerConfigError: 
services have not yet been read at least once, cannot construct envvars Dec 12 16:16:35 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.049565 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackT
oLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.050644 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-plb9v" podUID="af830c5e-c623-45f9-978d-bab9a3fdbd6c" Dec 12 16:16:35 crc kubenswrapper[5116]: E1212 16:16:35.050698 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.054845 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.054940 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.054966 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.054995 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.055016 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.158340 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.158426 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.158453 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.158489 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.158514 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.261617 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.261702 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.261724 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.261750 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.261771 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.369456 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.369516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.369533 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.369551 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.369565 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.472804 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.472891 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.472905 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.472926 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.472940 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.576311 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.576386 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.576404 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.576430 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.576448 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.679392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.679451 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.679467 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.679485 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.679498 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.782386 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.782448 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.782461 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.782481 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.782492 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.886040 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.886096 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.886120 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.886135 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.886145 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.989548 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.989640 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.989661 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.989690 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:35 crc kubenswrapper[5116]: I1212 16:16:35.989712 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:35Z","lastTransitionTime":"2025-12-12T16:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: E1212 16:16:36.049336 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:36 crc kubenswrapper[5116]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 12 16:16:36 crc kubenswrapper[5116]: apiVersion: v1 Dec 12 16:16:36 crc kubenswrapper[5116]: clusters: Dec 12 16:16:36 crc kubenswrapper[5116]: - cluster: Dec 12 16:16:36 crc kubenswrapper[5116]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 12 16:16:36 crc kubenswrapper[5116]: server: https://api-int.crc.testing:6443 Dec 12 16:16:36 crc kubenswrapper[5116]: name: default-cluster Dec 12 16:16:36 crc kubenswrapper[5116]: contexts: Dec 12 16:16:36 crc kubenswrapper[5116]: - context: Dec 12 16:16:36 crc kubenswrapper[5116]: cluster: default-cluster Dec 12 16:16:36 crc kubenswrapper[5116]: namespace: default Dec 12 16:16:36 crc kubenswrapper[5116]: user: default-auth Dec 12 16:16:36 crc kubenswrapper[5116]: name: default-context Dec 12 16:16:36 crc kubenswrapper[5116]: current-context: default-context Dec 12 16:16:36 crc kubenswrapper[5116]: kind: Config Dec 12 16:16:36 crc kubenswrapper[5116]: preferences: {} Dec 12 16:16:36 crc kubenswrapper[5116]: users: Dec 12 16:16:36 crc kubenswrapper[5116]: - name: default-auth Dec 12 16:16:36 crc kubenswrapper[5116]: user: Dec 12 16:16:36 crc kubenswrapper[5116]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 12 16:16:36 crc kubenswrapper[5116]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 12 16:16:36 crc kubenswrapper[5116]: EOF Dec 12 16:16:36 crc kubenswrapper[5116]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgwxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-fg2lh_openshift-ovn-kubernetes(789dbc62-9a37-4521-89a5-476e80e7beb6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:36 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:36 crc kubenswrapper[5116]: E1212 16:16:36.050825 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.072752 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.087849 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.093164 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.093219 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.093232 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.093251 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.093263 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.104896 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.120961 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.134012 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.147220 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.159473 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.181389 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.196207 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.196265 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.196275 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.196293 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.196310 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.198466 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.211860 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.225731 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.238379 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.245855 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.269434 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.285146 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.296816 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.298269 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.298376 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.298399 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.298424 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.298452 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.312045 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.325788 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.337818 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.401726 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.401792 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.401806 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.401825 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.401840 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.504664 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.504728 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.504741 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.504760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.504774 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.607605 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.607652 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.607666 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.607687 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.607710 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.710904 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.710989 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.711008 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.711039 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.711066 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.814047 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.814148 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.814172 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.814213 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.814235 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.917451 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.917544 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.917571 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.917603 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:36 crc kubenswrapper[5116]: I1212 16:16:36.917631 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:36Z","lastTransitionTime":"2025-12-12T16:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.019861 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.019920 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.019932 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.019951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.019963 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.044719 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.044796 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:37 crc kubenswrapper[5116]: E1212 16:16:37.044878 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.044720 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:37 crc kubenswrapper[5116]: E1212 16:16:37.044950 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.044996 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:37 crc kubenswrapper[5116]: E1212 16:16:37.045072 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:37 crc kubenswrapper[5116]: E1212 16:16:37.045145 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.122956 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.123035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.123056 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.123083 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.123129 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.225953 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.226072 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.226098 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.226173 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.226203 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.329866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.329957 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.329983 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.330018 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.330044 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.433914 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.434002 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.434018 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.434039 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.434054 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.536992 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.537080 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.537144 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.537186 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.537215 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.639978 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.640037 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.640046 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.640063 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.640074 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.743248 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.743335 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.743361 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.743396 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.743430 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.846694 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.846775 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.846799 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.846826 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.846845 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.949431 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.949510 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.949530 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.949557 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:37 crc kubenswrapper[5116]: I1212 16:16:37.949574 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:37Z","lastTransitionTime":"2025-12-12T16:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.051596 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.051695 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.051721 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.051753 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.051778 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.155170 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.155236 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.155283 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.155304 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.155319 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.258577 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.258636 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.258649 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.258670 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.258683 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.360987 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.361034 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.361065 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.361082 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.361092 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.434852 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.434903 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.434913 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.434928 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.434939 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: E1212 16:16:38.446810 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.452592 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.452680 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.452707 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.452739 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.452765 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: E1212 16:16:38.469663 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.474932 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.474984 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.474994 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.475010 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.475020 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: E1212 16:16:38.485465 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.489587 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.489627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.489637 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.489652 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.489663 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: E1212 16:16:38.501413 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.505796 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.505863 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.505873 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.505889 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.505905 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: E1212 16:16:38.518229 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:38 crc kubenswrapper[5116]: E1212 16:16:38.518427 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.520066 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.520154 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.520170 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.520191 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.520204 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.623301 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.623377 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.623392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.623417 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.623434 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.727017 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.727140 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.727163 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.727199 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.727224 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.830391 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.830475 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.830495 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.830527 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.830549 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.934101 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.934210 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.934224 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.934244 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.934261 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:38Z","lastTransitionTime":"2025-12-12T16:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:38 crc kubenswrapper[5116]: I1212 16:16:38.961596 5116 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.036560 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.037061 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.037203 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.037317 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.037391 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.043990 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.044016 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.043987 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:39 crc kubenswrapper[5116]: E1212 16:16:39.044152 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:39 crc kubenswrapper[5116]: E1212 16:16:39.044207 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:39 crc kubenswrapper[5116]: E1212 16:16:39.044291 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.044329 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:39 crc kubenswrapper[5116]: E1212 16:16:39.044383 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.140717 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.140767 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.140777 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.140794 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.140806 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.243295 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.243332 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.243341 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.243355 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.243364 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.346186 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.346268 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.346289 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.346317 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.346337 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.447920 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.447964 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.447975 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.447989 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.448000 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.549885 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.549941 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.549951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.549966 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.549976 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.652595 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.652673 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.652689 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.652713 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.652736 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.754427 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.754486 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.754500 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.754523 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.754537 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.857154 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.857220 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.857232 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.857247 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.857257 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.959699 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.959770 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.959782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.959801 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:39 crc kubenswrapper[5116]: I1212 16:16:39.959814 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:39Z","lastTransitionTime":"2025-12-12T16:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.062328 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.062388 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.062398 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.062413 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.062425 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.164917 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.164958 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.164969 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.164985 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.164994 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.266662 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.266700 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.266709 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.266724 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.266736 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.369000 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.369043 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.369062 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.369080 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.369094 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.471745 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.471815 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.471825 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.471862 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.471873 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.575059 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.575157 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.575174 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.575196 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.575211 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.678048 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.678136 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.678154 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.678173 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.678187 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.781047 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.781099 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.781148 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.781168 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.781179 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.883681 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.883760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.883770 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.883790 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.883804 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.986791 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.986905 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.986925 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.986954 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:40 crc kubenswrapper[5116]: I1212 16:16:40.986978 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:40Z","lastTransitionTime":"2025-12-12T16:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.044468 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.044519 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.044468 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.044679 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:41 crc kubenswrapper[5116]: E1212 16:16:41.044703 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f"
Dec 12 16:16:41 crc kubenswrapper[5116]: E1212 16:16:41.044799 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:16:41 crc kubenswrapper[5116]: E1212 16:16:41.044898 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:41 crc kubenswrapper[5116]: E1212 16:16:41.044989 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.090210 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.090351 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.090381 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.090423 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.090450 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.193662 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.193737 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.193757 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.193782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.193800 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.295967 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.296028 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.296040 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.296056 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.296066 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.397795 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.397853 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.397867 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.397889 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.397903 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.500751 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.501469 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.501514 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.501540 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.501553 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.604415 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.604506 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.604527 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.604554 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.604573 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.707948 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.708025 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.708050 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.708081 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.708102 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.811183 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.811258 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.811274 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.811294 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.811308 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.914798 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.914860 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.914873 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.914889 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:41 crc kubenswrapper[5116]: I1212 16:16:41.914900 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:41Z","lastTransitionTime":"2025-12-12T16:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.018492 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.018589 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.018619 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.018655 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.018681 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.121874 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.121971 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.121987 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.122032 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.122048 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.225266 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.225363 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.225387 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.225415 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.225434 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.328555 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.328620 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.328637 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.328660 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.328678 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.431349 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.431429 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.431448 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.431474 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.431494 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.535208 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.535317 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.535345 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.535383 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.535403 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.638210 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.638282 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.638300 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.638322 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.638337 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.740918 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.740983 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.740995 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.741014 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.741028 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.843811 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.843866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.843882 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.843906 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.843919 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.946366 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.946451 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.946472 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.946502 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:42 crc kubenswrapper[5116]: I1212 16:16:42.946523 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:42Z","lastTransitionTime":"2025-12-12T16:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.044948 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.045310 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.045757 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.045796 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.045940 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.046093 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.046166 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.046242 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.048919 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.048918 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82wdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Resi
zePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-84wvk_openshift-multus(814309ea-c9dc-4630-acd2-43b66b028bd5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.049134 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.049163 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.049194 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.049235 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:43 crc kubenswrapper[5116]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:43 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:43 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:43 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:43 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:43 crc kubenswrapper[5116]: fi Dec 12 16:16:43 crc kubenswrapper[5116]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 12 16:16:43 crc kubenswrapper[5116]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 12 16:16:43 crc kubenswrapper[5116]: ho_enable="--enable-hybrid-overlay" Dec 12 16:16:43 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 12 16:16:43 crc kubenswrapper[5116]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 12 16:16:43 crc kubenswrapper[5116]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 12 16:16:43 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:43 crc kubenswrapper[5116]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 12 16:16:43 crc kubenswrapper[5116]: --webhook-host=127.0.0.1 \ Dec 12 16:16:43 crc kubenswrapper[5116]: --webhook-port=9743 \ Dec 12 16:16:43 crc kubenswrapper[5116]: ${ho_enable} \ Dec 12 16:16:43 crc kubenswrapper[5116]: --enable-interconnect \ Dec 12 16:16:43 crc kubenswrapper[5116]: --disable-approver \ Dec 12 16:16:43 crc kubenswrapper[5116]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 12 16:16:43 crc kubenswrapper[5116]: --wait-for-kubernetes-api=200s \ Dec 12 16:16:43 crc kubenswrapper[5116]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 12 16:16:43 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 12 16:16:43 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:43 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.049351 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:43 crc kubenswrapper[5116]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 12 
16:16:43 crc kubenswrapper[5116]: set -uo pipefail Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 12 16:16:43 crc kubenswrapper[5116]: HOSTS_FILE="/etc/hosts" Dec 12 16:16:43 crc kubenswrapper[5116]: TEMP_FILE="/tmp/hosts.tmp" Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: # Make a temporary file with the old hosts file's attributes. Dec 12 16:16:43 crc kubenswrapper[5116]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 12 16:16:43 crc kubenswrapper[5116]: echo "Failed to preserve hosts file. Exiting." Dec 12 16:16:43 crc kubenswrapper[5116]: exit 1 Dec 12 16:16:43 crc kubenswrapper[5116]: fi Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: while true; do Dec 12 16:16:43 crc kubenswrapper[5116]: declare -A svc_ips Dec 12 16:16:43 crc kubenswrapper[5116]: for svc in "${services[@]}"; do Dec 12 16:16:43 crc kubenswrapper[5116]: # Fetch service IP from cluster dns if present. We make several tries Dec 12 16:16:43 crc kubenswrapper[5116]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 12 16:16:43 crc kubenswrapper[5116]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 12 16:16:43 crc kubenswrapper[5116]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 12 16:16:43 crc kubenswrapper[5116]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:43 crc kubenswrapper[5116]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:43 crc kubenswrapper[5116]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 12 16:16:43 crc kubenswrapper[5116]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 12 16:16:43 crc kubenswrapper[5116]: for i in ${!cmds[*]} Dec 12 16:16:43 crc kubenswrapper[5116]: do Dec 12 16:16:43 crc kubenswrapper[5116]: ips=($(eval "${cmds[i]}")) Dec 12 16:16:43 crc kubenswrapper[5116]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 12 16:16:43 crc kubenswrapper[5116]: svc_ips["${svc}"]="${ips[@]}" Dec 12 16:16:43 crc kubenswrapper[5116]: break Dec 12 16:16:43 crc kubenswrapper[5116]: fi Dec 12 16:16:43 crc kubenswrapper[5116]: done Dec 12 16:16:43 crc kubenswrapper[5116]: done Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: # Update /etc/hosts only if we get valid service IPs Dec 12 16:16:43 crc kubenswrapper[5116]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 12 16:16:43 crc kubenswrapper[5116]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 12 16:16:43 crc kubenswrapper[5116]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 12 16:16:43 crc kubenswrapper[5116]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 12 16:16:43 crc kubenswrapper[5116]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 12 16:16:43 crc kubenswrapper[5116]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 12 16:16:43 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:43 crc kubenswrapper[5116]: continue Dec 12 16:16:43 crc kubenswrapper[5116]: fi Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: # Append resolver entries for services Dec 12 16:16:43 crc kubenswrapper[5116]: rc=0 Dec 12 16:16:43 crc kubenswrapper[5116]: for svc in "${!svc_ips[@]}"; do Dec 12 16:16:43 crc kubenswrapper[5116]: for ip in ${svc_ips[${svc}]}; do Dec 12 16:16:43 crc kubenswrapper[5116]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 12 16:16:43 crc kubenswrapper[5116]: done Dec 12 16:16:43 crc kubenswrapper[5116]: done Dec 12 16:16:43 crc kubenswrapper[5116]: if [[ $rc -ne 0 ]]; then Dec 12 16:16:43 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:43 crc kubenswrapper[5116]: continue Dec 12 16:16:43 crc kubenswrapper[5116]: fi Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 12 16:16:43 crc kubenswrapper[5116]: # Replace /etc/hosts with our modified version if needed Dec 12 16:16:43 crc kubenswrapper[5116]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 12 16:16:43 crc kubenswrapper[5116]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 12 16:16:43 crc kubenswrapper[5116]: fi Dec 12 16:16:43 crc kubenswrapper[5116]: sleep 60 & wait Dec 12 16:16:43 crc kubenswrapper[5116]: unset svc_ips Dec 12 16:16:43 crc kubenswrapper[5116]: done Dec 12 16:16:43 crc kubenswrapper[5116]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5lvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xxzkd_openshift-dns(e0adf1a1-3140-410d-a33a-79b360ff4362): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 12 16:16:43 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.049216 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.050276 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-84wvk" podUID="814309ea-c9dc-4630-acd2-43b66b028bd5" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.050555 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xxzkd" podUID="e0adf1a1-3140-410d-a33a-79b360ff4362" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.051813 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 12 16:16:43 crc kubenswrapper[5116]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 12 16:16:43 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 12 16:16:43 crc kubenswrapper[5116]: set -o allexport Dec 12 16:16:43 crc kubenswrapper[5116]: source "/env/_master" Dec 12 16:16:43 crc kubenswrapper[5116]: set +o allexport Dec 12 16:16:43 crc kubenswrapper[5116]: fi Dec 12 16:16:43 crc kubenswrapper[5116]: Dec 12 16:16:43 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 12 16:16:43 crc 
kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 12 16:16:43 crc kubenswrapper[5116]: --disable-webhook \ Dec 12 16:16:43 crc kubenswrapper[5116]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 12 16:16:43 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 12 16:16:43 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have 
not yet been read at least once, cannot construct envvars Dec 12 16:16:43 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 12 16:16:43 crc kubenswrapper[5116]: E1212 16:16:43.053135 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.152532 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.152602 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.152618 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.152645 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.152666 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.255720 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.255786 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.255800 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.255822 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.255837 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.358127 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.358175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.358185 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.358202 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.358212 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.459793 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.459841 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.459854 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.459869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.459883 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.562771 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.562849 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.562861 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.562879 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.562892 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.665960 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.666015 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.666024 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.666039 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.666048 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.768663 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.768717 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.768731 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.768748 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.768759 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.871489 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.871558 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.871578 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.871602 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.871619 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.974261 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.974354 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.974375 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.974446 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:43 crc kubenswrapper[5116]: I1212 16:16:43.974469 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:43Z","lastTransitionTime":"2025-12-12T16:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.076792 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.076876 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.076903 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.076934 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.076958 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.179294 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.179344 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.179358 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.179377 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.179389 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.282180 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.282310 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.282326 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.282348 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.282361 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.384927 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.385013 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.385040 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.385073 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.385099 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.487873 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.487937 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.487951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.487971 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.487983 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.590994 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.591089 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.591169 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.591205 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.591233 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.693710 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.693768 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.693783 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.693806 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.693822 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.796230 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.796317 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.796336 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.796366 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.796386 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.899235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.899278 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.899290 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.899306 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:44 crc kubenswrapper[5116]: I1212 16:16:44.899317 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:44Z","lastTransitionTime":"2025-12-12T16:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.002898 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.002958 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.002973 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.002991 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.003006 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.039099 5116 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.045041 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:45 crc kubenswrapper[5116]: E1212 16:16:45.045864 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.045922 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:45 crc kubenswrapper[5116]: E1212 16:16:45.047905 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.047987 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.048223 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:45 crc kubenswrapper[5116]: E1212 16:16:45.048222 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:45 crc kubenswrapper[5116]: E1212 16:16:45.048373 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.106557 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.106648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.106667 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.106699 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.106719 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.209420 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.209474 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.209484 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.209501 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.209511 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.311626 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.311686 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.311701 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.311720 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.311733 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.414991 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.415070 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.415093 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.415151 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.415172 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.465880 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" event={"ID":"3252cf25-4bc0-4262-923c-20bb5a19f1cb","Type":"ContainerStarted","Data":"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.465968 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" event={"ID":"3252cf25-4bc0-4262-923c-20bb5a19f1cb","Type":"ContainerStarted","Data":"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.519267 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.519340 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.519354 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.519377 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.519396 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.623194 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.623314 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.623346 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.623384 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.623412 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.727233 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.727302 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.727315 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.727335 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.727351 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.829866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.829931 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.829951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.829971 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.829985 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.932535 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.932597 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.932608 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.932633 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:45 crc kubenswrapper[5116]: I1212 16:16:45.932649 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:45Z","lastTransitionTime":"2025-12-12T16:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.035850 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.035921 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.035936 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.035957 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.035974 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.059208 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.076065 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.103527 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.115357 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.129314 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.138319 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.138372 5116 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.138388 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.138409 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.138424 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.143693 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.160337 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.174592 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.189042 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.209154 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.221728 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50
Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.234378 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"
podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.240523 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.240582 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.240594 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.240612 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.240622 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.249633 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.259068 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.269385 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.289832 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.304992 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.316708 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.329036 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.343990 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.344047 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.344059 5116 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.344080 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.344095 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.447348 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.447409 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.447420 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.447440 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.447454 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.472784 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.490585 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\
"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1
e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\
\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.506433 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.525081 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.537695 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.551215 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.551286 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.551303 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.551327 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.551346 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.551630 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.575721 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\
\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.591820 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.603498 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.614275 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.625556 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.636995 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.651057 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.653944 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.654255 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.654377 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.654524 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.654661 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.660805 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.669619 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.682099 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 
16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.697018 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.710297 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.720036 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.740513 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.754166 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.757193 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.757263 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.757283 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.757311 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.757333 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.764485 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.773328 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.793135 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.810988 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.825422 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.838309 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.850988 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.859211 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.859275 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.859296 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.859322 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.859342 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.866799 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.883462 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.894155 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.906000 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.917675 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 
16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.930845 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.942591 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.953555 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.962782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.962874 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 
16:16:46.962906 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.962940 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.962973 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:46Z","lastTransitionTime":"2025-12-12T16:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.978426 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:46 crc kubenswrapper[5116]: I1212 16:16:46.992533 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667e
da4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\
\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.003397 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.044077 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:47 crc kubenswrapper[5116]: E1212 16:16:47.044256 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.044490 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:47 crc kubenswrapper[5116]: E1212 16:16:47.044551 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.045341 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:47 crc kubenswrapper[5116]: E1212 16:16:47.045564 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.045703 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:47 crc kubenswrapper[5116]: E1212 16:16:47.045806 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.066095 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.066182 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.066202 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.066226 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.066243 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.172162 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.172582 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.172593 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.172611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.172622 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.275359 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.275450 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.275479 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.275515 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.275541 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.379872 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.379975 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.380012 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.380056 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.380085 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.480507 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bphkq" event={"ID":"0e71d710-0829-4655-b88f-9318b9776228","Type":"ContainerStarted","Data":"9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.482813 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.483158 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.483204 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.483304 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.483394 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.497075 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.518539 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47
Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.531867 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.543204 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.571976 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.587494 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.587553 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.587568 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc 
kubenswrapper[5116]: I1212 16:16:47.587594 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.587609 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.605443 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8fe
eea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"m
essage\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\
\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.628033 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.647205 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.665924 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.682672 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.689850 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.690073 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.690178 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.690253 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.690319 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.697187 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.708737 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.721569 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.733895 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 
16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.749020 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.761004 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.774793 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.792815 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.792930 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 
16:16:47.792950 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.792981 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.793001 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.795585 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.812711 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667e
da4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\
\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.895986 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.896828 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.896918 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.897009 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.897098 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.999440 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.999789 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:47 crc kubenswrapper[5116]: I1212 16:16:47.999900 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:47.999991 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.000093 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:47Z","lastTransitionTime":"2025-12-12T16:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.102794 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.102852 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.102863 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.102883 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.102897 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.204984 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.205050 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.205070 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.205093 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.205127 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.308449 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.310065 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.310178 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.310239 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.310266 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.413338 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.413386 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.413397 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.413411 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.413421 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.487078 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.517355 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.517439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.517458 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.517488 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.517511 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.620772 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.620847 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.620865 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.620891 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.620911 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.711082 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.711216 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.711235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.711263 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.711282 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: E1212 16:16:48.725763 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.731837 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.731907 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.731928 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.731951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.731969 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: E1212 16:16:48.746500 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.752288 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.752349 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.752360 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.752380 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.752393 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: E1212 16:16:48.766019 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.770198 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.770247 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.770260 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.770281 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.770294 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: E1212 16:16:48.781241 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.785621 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.785669 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.785685 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.785703 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.785715 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: E1212 16:16:48.796698 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"56a9ea63-479b-430c-9c05-3bf8c2deb332\\\",\\\"systemUUID\\\":\\\"26268ba2-1151-4589-80cf-5071a8d9f1b0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:48 crc kubenswrapper[5116]: E1212 16:16:48.797325 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.799147 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.799216 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.799232 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.799257 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.799275 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.902504 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.902575 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.902596 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.902619 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:48 crc kubenswrapper[5116]: I1212 16:16:48.902634 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:48Z","lastTransitionTime":"2025-12-12T16:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.005135 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.005194 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.005206 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.005229 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.005243 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.044247 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.044247 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.044265 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.044616 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:49 crc kubenswrapper[5116]: E1212 16:16:49.044429 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:49 crc kubenswrapper[5116]: E1212 16:16:49.044603 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:49 crc kubenswrapper[5116]: E1212 16:16:49.044718 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:49 crc kubenswrapper[5116]: E1212 16:16:49.044856 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.107622 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.108010 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.108021 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.108036 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.108046 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.212336 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.212383 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.212394 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.212410 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.212423 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.314187 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.314507 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.314519 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.314533 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.314542 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.418826 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.418898 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.418916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.418938 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.418954 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.492860 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"c41f3e32f084b600d29f1271362cfdd832a4bc936ea779e25e31f8e58b07df9c"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.508279 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-
crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.521794 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.521865 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.521881 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.521913 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.521928 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.523374 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supple
mentalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.534414 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.546917 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.570016 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.584295 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.602542 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.615807 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.624718 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.624812 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.624842 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.624878 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.624905 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.629800 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.643263 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.659965 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.673403 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.683883 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc936ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.693079 5116 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/m
etrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.706687 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.716928 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.726414 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.726946 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.726994 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 
16:16:49.727005 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.727021 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.727031 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.749101 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.760024 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667e
da4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\
\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.829501 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.829566 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.829583 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.829611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.829628 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.933207 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.933272 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.933287 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.933305 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:49 crc kubenswrapper[5116]: I1212 16:16:49.933317 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:49Z","lastTransitionTime":"2025-12-12T16:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.036351 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.036430 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.036453 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.036479 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.036497 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.139169 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.139721 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.139739 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.139760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.139776 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.244649 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.244712 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.244725 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.244742 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.244758 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.347402 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.347627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.347765 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.347905 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.348039 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.451222 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.451636 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.451738 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.451846 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.451945 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.498604 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-plb9v" event={"ID":"af830c5e-c623-45f9-978d-bab9a3fdbd6c","Type":"ContainerStarted","Data":"64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.500517 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243" exitCode=0 Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.500666 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.519043 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.534292 5116 
status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 
16:16:50.550639 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.556345 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.556400 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.556411 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.556430 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.556444 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.560477 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.570244 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.588573 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.605688 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.620343 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.634562 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.655463 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.660424 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.660454 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.660462 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.660482 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.660502 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.668704 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.682221 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.690495 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.701198 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc936ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.711161 5116 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/m
etrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.724581 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.740872 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.751770 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.762190 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.762249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 
16:16:50.762264 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.762283 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.762297 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.767936 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.779431 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.792858 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.808264 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.824033 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.837902 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.837999 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.838024 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838076 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:22.838046519 +0000 UTC m=+137.302258265 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.838149 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.838198 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838270 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838308 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838353 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:22.838316056 +0000 UTC m=+137.302527812 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838354 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838387 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838388 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:22.838365567 +0000 UTC m=+137.302577533 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838401 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838413 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838453 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838467 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838470 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:22.838450779 +0000 UTC m=+137.302662535 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.838521 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:22.838509981 +0000 UTC m=+137.302721737 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.841377 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.852755 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.864836 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.864916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.864940 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.864969 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.864989 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.865521 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc93
6ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"n
ame\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.877879 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\
\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.892225 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.905432 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.917613 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.942858 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 
16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.943056 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: E1212 16:16:50.943203 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs podName:eb955636-d9f0-41af-b498-6d380bb8ad2f nodeName:}" failed. No retries permitted until 2025-12-12 16:17:22.943181603 +0000 UTC m=+137.407393359 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs") pod "network-metrics-daemon-gbh7p" (UID: "eb955636-d9f0-41af-b498-6d380bb8ad2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.950001 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.967192 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.967266 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.967282 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.967304 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.967318 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:50Z","lastTransitionTime":"2025-12-12T16:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:50 crc kubenswrapper[5116]: I1212 16:16:50.977399 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-schedu
ler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345
491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.007031 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.022609 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e278
0e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.032707 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d2
4fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.043154 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.044336 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:51 crc kubenswrapper[5116]: E1212 16:16:51.044470 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.044332 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.044657 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:51 crc kubenswrapper[5116]: E1212 16:16:51.044789 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:51 crc kubenswrapper[5116]: E1212 16:16:51.044803 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.045024 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:51 crc kubenswrapper[5116]: E1212 16:16:51.045328 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.067933 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab2
1321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"re
quests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCo
de\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.068858 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.068902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.068918 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.068943 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.068967 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.086796 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e0
3355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 
16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.170760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.171127 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.171231 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 
16:16:51.171309 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.171387 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.273890 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.274272 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.274365 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.274473 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.274562 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.377915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.377976 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.377989 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.378014 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.378028 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.480854 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.481362 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.481381 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.481401 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.481414 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.513411 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.513495 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.513507 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.513536 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.513547 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.513580 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc"} Dec 12 16:16:51 crc kubenswrapper[5116]: 
I1212 16:16:51.584611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.584678 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.584689 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.584708 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.584723 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.687792 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.687852 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.687864 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.687882 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.687896 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.790765 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.790856 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.790876 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.790908 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.790930 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.894096 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.894220 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.894248 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.894288 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.894315 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.995795 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.995842 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.995853 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.995872 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:51 crc kubenswrapper[5116]: I1212 16:16:51.995881 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:51Z","lastTransitionTime":"2025-12-12T16:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.097461 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.097512 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.097521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.097536 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.097546 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.199902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.199950 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.199963 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.199982 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.199994 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.302844 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.302959 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.302978 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.303075 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.303096 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.404929 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.404973 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.404982 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.404995 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.405006 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.507015 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.507058 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.507068 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.507083 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.507093 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.610329 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.610439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.610467 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.610505 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.610535 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.712997 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.713085 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.713099 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.713134 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.713146 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.815418 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.815498 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.815516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.815536 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.815549 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.917595 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.917649 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.917661 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.917677 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:52 crc kubenswrapper[5116]: I1212 16:16:52.917687 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:52Z","lastTransitionTime":"2025-12-12T16:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.020161 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.020224 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.020237 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.020256 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.020270 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.045012 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.045053 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.045053 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:53 crc kubenswrapper[5116]: E1212 16:16:53.045193 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:53 crc kubenswrapper[5116]: E1212 16:16:53.045286 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:53 crc kubenswrapper[5116]: E1212 16:16:53.045341 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.045415 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:53 crc kubenswrapper[5116]: E1212 16:16:53.045533 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.122645 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.122709 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.122723 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.122744 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.122760 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.225718 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.225780 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.225795 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.225814 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.225827 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.328548 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.328611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.328626 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.328647 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.328663 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.430938 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.431018 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.431035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.431055 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.431067 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.532714 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.532769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.532782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.532801 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.532814 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.634689 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.634767 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.634784 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.634803 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.634817 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.737465 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.737507 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.737516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.737530 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.737540 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.840396 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.840460 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.840471 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.840487 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.840497 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.942923 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.943392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.943565 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.943659 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:53 crc kubenswrapper[5116]: I1212 16:16:53.943736 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:53Z","lastTransitionTime":"2025-12-12T16:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.045985 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.046024 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.046036 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.046048 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.046058 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.148660 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.148709 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.148722 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.148741 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.148751 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.251043 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.251084 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.251095 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.251130 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.251142 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.353727 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.353770 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.353781 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.353796 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.353806 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.456345 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.456412 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.456432 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.456456 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.456476 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.539600 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.541192 5116 generic.go:358] "Generic (PLEG): container finished" podID="814309ea-c9dc-4630-acd2-43b66b028bd5" containerID="50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737" exitCode=0 Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.541251 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerDied","Data":"50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.543674 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"9b7d21cadb0b28fda47cbe5b55e33415830773175a7455aa565e306d29d62866"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.558651 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc936ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.558776 5116 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.558849 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.558871 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.558899 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.558914 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.573123 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\
\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.586744 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.600424 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.614375 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.633403 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.650004 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.661493 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.661544 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.661554 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.661569 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.661580 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.661631 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.675596 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47
Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.686029 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cp
u\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.695634 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.714530 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\
\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.730064 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.742072 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.753679 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.764151 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.764211 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.764230 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.764253 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.764270 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.766486 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.782157 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.794820 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.804487 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.827908 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.843045 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.855420 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.866068 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.866117 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.866127 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.866146 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.866156 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.869882 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.880460 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.890935 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.903033 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.914182 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.925014 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc936ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.933600 5116 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/m
etrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.946916 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.959166 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.967831 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.967882 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.967891 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.967914 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.967926 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:54Z","lastTransitionTime":"2025-12-12T16:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.970795 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b7d21cadb0b28fda47cbe5b55e33415830773175a7455aa565e306d29d62866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:54 crc kubenswrapper[5116]: I1212 16:16:54.995034 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.006704 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.018629 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.032274 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc 
kubenswrapper[5116]: I1212 16:16:55.045072 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.045141 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.045141 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:55 crc kubenswrapper[5116]: E1212 16:16:55.045330 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.045359 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:55 crc kubenswrapper[5116]: E1212 16:16:55.045674 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:55 crc kubenswrapper[5116]: E1212 16:16:55.045771 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:55 crc kubenswrapper[5116]: E1212 16:16:55.045969 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.049677 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.061809 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.069886 5116 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.069939 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.069952 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.069972 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.069986 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.172466 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.172504 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.172514 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.172530 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.172539 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.274748 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.275124 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.275143 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.275168 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.275184 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.377679 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.377728 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.377739 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.377754 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.377766 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.480231 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.480281 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.480291 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.480310 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.480320 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.548823 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xxzkd" event={"ID":"e0adf1a1-3140-410d-a33a-79b360ff4362","Type":"ContainerStarted","Data":"aaa6bc21f66f161ac96ea24c298849a5f7e22d29611d4d00ebac769d566af1ea"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.550902 5116 generic.go:358] "Generic (PLEG): container finished" podID="814309ea-c9dc-4630-acd2-43b66b028bd5" containerID="12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01" exitCode=0 Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.551008 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerDied","Data":"12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.561794 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://aaa6bc21f66f161ac96ea24c298849a5f7e22d29611d4d00ebac769d566af1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.574467 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084
b600d29f1271362cfdd832a4bc936ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.583521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.583575 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.583589 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.583610 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.583623 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.587280 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\
\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.599672 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.611265 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.620743 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b7d21cadb0b28fda47cbe5b55e33415830773175a7455aa565e306d29d62866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.638383 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.651658 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.665790 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.677881 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc 
kubenswrapper[5116]: I1212 16:16:55.688634 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.688688 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.688699 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.688718 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.688729 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.688861 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.698381 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.716746 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.732616 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.744889 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.755646 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.767003 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.776639 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.788780 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.791131 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.791182 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.791195 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.791211 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.791222 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.806742 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.820818 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.834902 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.844620 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.854735 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.865879 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.877026 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.885487 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://aaa6bc21f66f161ac96ea24c298849a5f7e22d29611d4d00ebac769d566af1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.894087 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.894203 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.894234 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.894265 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.894292 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.897523 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc93
6ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"n
ame\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.908720 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\
\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.921342 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\"
 for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.937142 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.950234 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b7d21cadb0b28fda47cbe5b55e33415830773175a7455aa565e306d29d62866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.971288 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.985717 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.996521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.996583 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.996594 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.996611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.996627 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:55Z","lastTransitionTime":"2025-12-12T16:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:55 crc kubenswrapper[5116]: I1212 16:16:55.997397 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.007999 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47
Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.021615 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cp
u\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.032566 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.071329 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\
\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.087173 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 
16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.099129 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.099806 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.099844 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.099857 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.099877 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.099889 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.114516 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.125899 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.136541 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.188486 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.197511 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://aaa6bc21f66f161ac96ea24c298849a5f7e22d29611d4d00ebac769d566af1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.202808 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.202924 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.203026 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 
16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.203155 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.203249 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.209566 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc93
6ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"n
ame\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.221132 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\
\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.232882 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\"
 for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.244842 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.255983 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b7d21cadb0b28fda47cbe5b55e33415830773175a7455aa565e306d29d62866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.271836 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.284714 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8301e58a-03d5-4487-842d-447a6d9f2ce3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3daea7845d81f7a3667eda4dd516640188c945e83b055770de1022116484cbb5\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23586067c533b800126cec7e1222356383fe767bbf8bd369fda7c7b19c705182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c20f8e0f4d72c85df47ce1ec547dfae6edc6101c55db676900a809acacec874f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89e24ac7aa2261844d3126914cb19a1a6e176e7041ff9e95a26f1af87895d38c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.297764 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636d922-0169-45b1-a57f-2039d1f3dec2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b48e605d5129fd110898bc35682d989964f5fe7b0fd0a9d3a0f937dc14877ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e09d0a73a0982aa08d8ae42534f8d992d02125954052cfd02101951cf6902b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.305016 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc 
kubenswrapper[5116]: I1212 16:16:56.305059 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.305070 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.305087 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.305098 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.309523 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bphkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e71d710-0829-4655-b88f-9318b9776228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlv5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bphkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc 
kubenswrapper[5116]: I1212 16:16:56.320324 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.349181 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.407002 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.407048 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.407061 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.407080 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.407138 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.509940 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.510391 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.510541 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.510680 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.510822 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.558567 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerStarted","Data":"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.559234 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.561095 5116 generic.go:358] "Generic (PLEG): container finished" podID="814309ea-c9dc-4630-acd2-43b66b028bd5" containerID="26f1f943dbc8159d99913a30d399ab5d328907a71044516d51a49bf5480d1f54" exitCode=0 Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.561180 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerDied","Data":"26f1f943dbc8159d99913a30d399ab5d328907a71044516d51a49bf5480d1f54"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.564018 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"76460744321ddcbb08e28bc0e08acf65dbb7f6950c8fc4a75249e31e9ba25e9c"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.564080 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"8ebe8800491bbc384e40290290d51ca35dd127bbafc7a5d4fc7c7b45818431ad"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.570335 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plb9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af830c5e-c623-45f9-978d-bab9a3fdbd6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://64a5facc811da01526bcd1d24fdfa8e38385b3e828d6f9e3768efe6e5e24a26f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqphd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plb9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.579660 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb955636-d9f0-41af-b498-6d380bb8ad2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlmd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbh7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.598765 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6daf9bd9-8546-4972-8953-e77f39e3ecd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35d654566801dcd21ab21321cffdfb40ec621793116aaca391a0d6b99f4a118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a29dfc13647e25cbd38e2df140db43b6c39914cec33546b5f9a8ab4f623309b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a6343f7d51a568fa981fe65cc29ea7175565d6344e323caa0b0449c7a016215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f3a06ceab5e6a613fdcc1f44f3c6133cccdc163034ae0e6991ed2316adfc46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ac7fbeb6cbb047ecca9961f03693bfb39a2d01d0a4bcb83f2485f0f1ef8a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87abea85804961f92243dd0be5b380f34966036077671c33a77ba580ea60f7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\
\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fa3c4e23d1f673b31807f89a66068b2b7af038a1c43de576a9a6badc4f921d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c630a0e79b6dd01ecf1d5384a0a077554719388671fa443c37415f9bb18cda8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.614593 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.614652 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.614664 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.614681 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.614695 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.616911 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053e68c6-626a-4d3a-9f34-a55711644dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\
\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c642f324bea0af4215f78
80b5858ad82f6cb3fb53646709c8a2a100a864958e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:15:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 16:15:53.696504 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:15:53.696679 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:15:53.697847 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3482601346/tls.crt::/tmp/serving-cert-3482601346/tls.key\\\\\\\"\\\\nI1212 16:15:54.195995 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:15:54.198492 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:15:54.198517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:15:54.198549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:15:54.198554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:15:54.203872 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1212 16:15:54.203873 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:15:54.203924 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:15:54.203933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:15:54.203936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:15:54.203940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:15:54.203943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 16:15:54.207969 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.630839 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62164a7dade374bcf1e5649efebf457f1cd1f69dc7e7f588b1da6d332c108ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.646372 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.658192 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.670543 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.716395 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dc59b6e-f5c7-4ae5-b5e6-10d4fed6f72e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e44baeeefabb12b6f0a61b4660979313fffe81540036201c03bdb35492d80d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd4afbebcbfe83568ff0a8ffd2531a34ad8376423321774aee0dd4c6487a1f9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://67a806000a140c2f8f66c600f01c0753792e18c6fb8881f92144e13a0c71cd2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.717524 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.717594 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.717609 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.717629 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.717642 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.752875 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xxzkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0adf1a1-3140-410d-a33a-79b360ff4362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://aaa6bc21f66f161ac96ea24c298849a5f7e22d29611d4d00ebac769d566af1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5lvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xxzkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.791627 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fedd19a-ed2a-4e65-a3ad-e104203261fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c41f3e32f084b600d29f1271362cfdd832a4bc936ea779e25e31f8e58b07df9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bb58t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.822918 5116 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.823272 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.823412 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.823531 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.823655 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.836341 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3252cf25-4bc0-4262-923c-20bb5a19f1cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\
\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-str5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-fl6jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.878294 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"814309ea-c9dc-4630-acd2-43b66b028bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fd565d63a38e56c3eccafa28c5eabc321ba5d00ae18b6a5925abc51e380737\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12faf32e88e173de1353ddc003891afde3955a6757c222a47dd8d50b83a61f01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e
3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-82wdg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-84wvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.916831 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.926249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.926457 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.926635 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.926759 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.926903 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:56Z","lastTransitionTime":"2025-12-12T16:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:56 crc kubenswrapper[5116]: I1212 16:16:56.953631 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b7d21cadb0b28fda47cbe5b55e33415830773175a7455aa565e306d29d62866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.005953 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789dbc62-9a37-4521-89a5-476e80e7beb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ov
n-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\
\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\
\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:16:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgwxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fg2lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.044723 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:57 crc kubenswrapper[5116]: E1212 16:16:57.044874 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.044926 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.044931 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:57 crc kubenswrapper[5116]: E1212 16:16:57.044977 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:57 crc kubenswrapper[5116]: E1212 16:16:57.045072 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.045578 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:57 crc kubenswrapper[5116]: E1212 16:16:57.045869 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.224054 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.224130 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.224147 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.224164 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.224176 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.298457 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.311790 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.311761418 podStartE2EDuration="39.311761418s" podCreationTimestamp="2025-12-12 16:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.277391814 +0000 UTC m=+111.741603620" watchObservedRunningTime="2025-12-12 16:16:57.311761418 +0000 UTC m=+111.775973174" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.312215 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.31220938 podStartE2EDuration="39.31220938s" podCreationTimestamp="2025-12-12 16:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.311535211 +0000 UTC m=+111.775746967" watchObservedRunningTime="2025-12-12 16:16:57.31220938 +0000 UTC m=+111.776421146" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.332611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.332656 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.332670 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.332688 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.332698 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.356165 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bphkq" podStartSLOduration=91.3561421 podStartE2EDuration="1m31.3561421s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.333160423 +0000 UTC m=+111.797372189" watchObservedRunningTime="2025-12-12 16:16:57.3561421 +0000 UTC m=+111.820353856" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.412468 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podStartSLOduration=91.412450384 podStartE2EDuration="1m31.412450384s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.411749654 +0000 UTC m=+111.875961400" watchObservedRunningTime="2025-12-12 16:16:57.412450384 +0000 UTC m=+111.876662140" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.434757 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.434822 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.434833 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.434851 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.434864 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.440705 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-plb9v" podStartSLOduration=91.440684492 podStartE2EDuration="1m31.440684492s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.428584147 +0000 UTC m=+111.892795903" watchObservedRunningTime="2025-12-12 16:16:57.440684492 +0000 UTC m=+111.904896248" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.500714 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=39.500696564 podStartE2EDuration="39.500696564s" podCreationTimestamp="2025-12-12 16:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.479956997 +0000 UTC m=+111.944168773" 
watchObservedRunningTime="2025-12-12 16:16:57.500696564 +0000 UTC m=+111.964908320" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.501046 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=39.501041544 podStartE2EDuration="39.501041544s" podCreationTimestamp="2025-12-12 16:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.49942287 +0000 UTC m=+111.963634646" watchObservedRunningTime="2025-12-12 16:16:57.501041544 +0000 UTC m=+111.965253300" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.536439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.536486 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.536497 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.536515 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.536526 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.570399 5116 generic.go:358] "Generic (PLEG): container finished" podID="814309ea-c9dc-4630-acd2-43b66b028bd5" containerID="56e0a1af48f2e732cd60cdc2edd39e61b54d9f42a03238abfda7b9a21d23abd7" exitCode=0 Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.570479 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerDied","Data":"56e0a1af48f2e732cd60cdc2edd39e61b54d9f42a03238abfda7b9a21d23abd7"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.571129 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.571156 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.602966 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.639984 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.640058 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.640071 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.640088 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.640098 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.645504 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=39.645463934 podStartE2EDuration="39.645463934s" podCreationTimestamp="2025-12-12 16:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.645408962 +0000 UTC m=+112.109620718" watchObservedRunningTime="2025-12-12 16:16:57.645463934 +0000 UTC m=+112.109675690" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.673844 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xxzkd" podStartSLOduration=91.673815875 podStartE2EDuration="1m31.673815875s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.673009984 +0000 UTC m=+112.137221760" watchObservedRunningTime="2025-12-12 16:16:57.673815875 +0000 UTC m=+112.138027631" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.714978 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podStartSLOduration=91.714944471 podStartE2EDuration="1m31.714944471s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-12 16:16:57.713159962 +0000 UTC m=+112.177371738" watchObservedRunningTime="2025-12-12 16:16:57.714944471 +0000 UTC m=+112.179156247" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.742886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.742946 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.742958 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.742986 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.743000 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.800596 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" podStartSLOduration=91.800577861 podStartE2EDuration="1m31.800577861s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:57.755912271 +0000 UTC m=+112.220124037" watchObservedRunningTime="2025-12-12 16:16:57.800577861 +0000 UTC m=+112.264789617" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.846302 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.846355 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.846367 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.846382 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.846393 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.949777 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.950235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.950499 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.950704 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:57 crc kubenswrapper[5116]: I1212 16:16:57.950892 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:57Z","lastTransitionTime":"2025-12-12T16:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.053611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.053688 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.053708 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.053740 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.053762 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.157093 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.157159 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.157170 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.157186 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.157195 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.259901 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.259992 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.260011 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.260034 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.260050 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.363298 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.363348 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.363358 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.363376 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.363388 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.465732 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.465769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.465778 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.465793 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.465802 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.568009 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.568055 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.568073 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.568092 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.568102 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.576358 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerStarted","Data":"0bcbb8a8a8481d6d161ae0f9621c472b233ee8005e4d0b07f2641427fb36a029"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.669785 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.669839 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.669854 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.669869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.669879 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.772401 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.772454 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.772465 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.772482 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.772494 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.816968 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.817041 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.817067 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.817101 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.817176 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:58Z","lastTransitionTime":"2025-12-12T16:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:58 crc kubenswrapper[5116]: I1212 16:16:58.871781 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv"] Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.035818 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.039779 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.040574 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.040823 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.041779 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.043362 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.044603 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.044691 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:59 crc kubenswrapper[5116]: E1212 16:16:59.044693 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.044603 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:59 crc kubenswrapper[5116]: E1212 16:16:59.044874 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f" Dec 12 16:16:59 crc kubenswrapper[5116]: E1212 16:16:59.044973 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.044721 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:59 crc kubenswrapper[5116]: E1212 16:16:59.045078 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.048469 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gbh7p"] Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.057612 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.143011 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b27e61-2de9-4fb6-950b-fa89459a6f40-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.143361 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39b27e61-2de9-4fb6-950b-fa89459a6f40-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.143413 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/39b27e61-2de9-4fb6-950b-fa89459a6f40-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.143431 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39b27e61-2de9-4fb6-950b-fa89459a6f40-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.143451 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/39b27e61-2de9-4fb6-950b-fa89459a6f40-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.245417 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/39b27e61-2de9-4fb6-950b-fa89459a6f40-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.245465 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39b27e61-2de9-4fb6-950b-fa89459a6f40-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.245487 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/39b27e61-2de9-4fb6-950b-fa89459a6f40-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: 
\"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.245557 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b27e61-2de9-4fb6-950b-fa89459a6f40-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.245600 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39b27e61-2de9-4fb6-950b-fa89459a6f40-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.245569 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/39b27e61-2de9-4fb6-950b-fa89459a6f40-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.245717 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/39b27e61-2de9-4fb6-950b-fa89459a6f40-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.246462 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/39b27e61-2de9-4fb6-950b-fa89459a6f40-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.259590 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b27e61-2de9-4fb6-950b-fa89459a6f40-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.264226 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39b27e61-2de9-4fb6-950b-fa89459a6f40-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-8x6lv\" (UID: \"39b27e61-2de9-4fb6-950b-fa89459a6f40\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.416999 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.584132 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" event={"ID":"39b27e61-2de9-4fb6-950b-fa89459a6f40","Type":"ContainerStarted","Data":"a176fa68ae764cd912e9d62d528e555e9e9a62cb62d93dd7c3653bdf1b672ebc"} Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.588974 5116 generic.go:358] "Generic (PLEG): container finished" podID="814309ea-c9dc-4630-acd2-43b66b028bd5" containerID="0bcbb8a8a8481d6d161ae0f9621c472b233ee8005e4d0b07f2641427fb36a029" exitCode=0 Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.589115 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerDied","Data":"0bcbb8a8a8481d6d161ae0f9621c472b233ee8005e4d0b07f2641427fb36a029"} Dec 12 16:16:59 crc kubenswrapper[5116]: I1212 16:16:59.589257 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:16:59 crc kubenswrapper[5116]: E1212 16:16:59.589544 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f"
Dec 12 16:17:00 crc kubenswrapper[5116]: I1212 16:17:00.593781 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" event={"ID":"39b27e61-2de9-4fb6-950b-fa89459a6f40","Type":"ContainerStarted","Data":"2f83eafc84829f6562bcc24b407f6d1c7f1257a519a4b3fd0e0f1696ddef87b2"}
Dec 12 16:17:00 crc kubenswrapper[5116]: I1212 16:17:00.597075 5116 generic.go:358] "Generic (PLEG): container finished" podID="814309ea-c9dc-4630-acd2-43b66b028bd5" containerID="f25376dc5d9c9d0aebf251a90dc6089008c0a8c7220026b8ded403e826fb933f" exitCode=0
Dec 12 16:17:00 crc kubenswrapper[5116]: I1212 16:17:00.597166 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerDied","Data":"f25376dc5d9c9d0aebf251a90dc6089008c0a8c7220026b8ded403e826fb933f"}
Dec 12 16:17:00 crc kubenswrapper[5116]: I1212 16:17:00.613889 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-8x6lv" podStartSLOduration=94.613866645 podStartE2EDuration="1m34.613866645s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:00.609046466 +0000 UTC m=+115.073258222" watchObservedRunningTime="2025-12-12 16:17:00.613866645 +0000 UTC m=+115.078078401"
Dec 12 16:17:01 crc kubenswrapper[5116]: I1212 16:17:01.044717 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:17:01 crc kubenswrapper[5116]: I1212 16:17:01.044717 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:17:01 crc kubenswrapper[5116]: E1212 16:17:01.045168 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:17:01 crc kubenswrapper[5116]: I1212 16:17:01.044809 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:17:01 crc kubenswrapper[5116]: I1212 16:17:01.044776 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p"
Dec 12 16:17:01 crc kubenswrapper[5116]: E1212 16:17:01.045321 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:17:01 crc kubenswrapper[5116]: E1212 16:17:01.045448 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f"
Dec 12 16:17:01 crc kubenswrapper[5116]: E1212 16:17:01.045523 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:17:01 crc kubenswrapper[5116]: I1212 16:17:01.612751 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-84wvk" event={"ID":"814309ea-c9dc-4630-acd2-43b66b028bd5","Type":"ContainerStarted","Data":"4ee2c557b6941841b82bdff543fa6a33e8fcff2917df78918f308ffec32dd541"}
Dec 12 16:17:03 crc kubenswrapper[5116]: I1212 16:17:03.044614 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p"
Dec 12 16:17:03 crc kubenswrapper[5116]: I1212 16:17:03.044681 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:17:03 crc kubenswrapper[5116]: I1212 16:17:03.044737 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:17:03 crc kubenswrapper[5116]: E1212 16:17:03.044988 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbh7p" podUID="eb955636-d9f0-41af-b498-6d380bb8ad2f"
Dec 12 16:17:03 crc kubenswrapper[5116]: E1212 16:17:03.045097 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:17:03 crc kubenswrapper[5116]: E1212 16:17:03.045226 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:17:03 crc kubenswrapper[5116]: I1212 16:17:03.045286 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:17:03 crc kubenswrapper[5116]: E1212 16:17:03.045381 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.228307 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.228530 5116 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.275681 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-84wvk" podStartSLOduration=98.275660046 podStartE2EDuration="1m38.275660046s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:01.649034677 +0000 UTC m=+116.113246503" watchObservedRunningTime="2025-12-12 16:17:04.275660046 +0000 UTC m=+118.739871822"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.276192 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pq598"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.802267 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.802510 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.806208 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.806306 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.806388 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.806821 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.808635 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.809210 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.813888 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.813892 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.814707 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.826214 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.827067 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.834979 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.835058 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.835250 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.835287 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.835424 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.836496 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.838487 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.839291 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.849344 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-svwnw"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.850192 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.854040 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-g5nbl"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.854831 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.855166 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.855551 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.855840 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.854263 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.856702 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.856813 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.856906 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.856983 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.857066 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.857738 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.857913 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.858465 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lw784"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.859332 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-g5nbl"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.861352 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.861482 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.862300 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.862447 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.865348 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.865499 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.865682 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.865716 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.865880 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.866152 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.866408 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.866630 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.866687 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.866840 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.867184 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.871750 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.871931 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.872060 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.873037 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.875090 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qgtsr"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.875338 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.875403 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.875539 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.875669 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.875858 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.875980 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.876176 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.876396 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.876631 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.876734 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.877767 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.877811 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.878023 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.878945 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-87slr"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.883601 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.884918 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.885226 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.885385 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.885482 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.885505 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.885773 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.886005 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.886208 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.885786 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.885633 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.886445 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.888147 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.888809 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.889004 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.890075 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.890093 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.891974 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qh8zt"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.892191 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.892326 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.892584 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-87slr"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.897145 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.900336 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.900649 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.901220 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.901315 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.901453 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.901582 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.903078 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.903148 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.903302 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qh8zt"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.904837 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.905935 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-24z4m"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.907146 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.908274 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.908739 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.908772 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.911211 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.911296 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.911380 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.912119 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.912361 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.912482 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.912496 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.912777 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.912879 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.913504 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.913260 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.913315 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.913657 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.913691 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.914137 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.914262 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.915835 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.916324 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.916604 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.916736 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.917023 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.919755 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-k9w8q"]
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.921757 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.925326 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.926022 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.926920 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.927961 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930198 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cb5172ad-e8a1-4893-a33f-9e95b26fd720-machine-approver-tls\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930240 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-etcd-serving-ca\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930258 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-encryption-config\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930310 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-audit-policies\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930329 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-serving-cert\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930352 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q77c\" (UniqueName: \"kubernetes.io/projected/505ad756-8433-456f-8a6a-d391d7da9b1c-kube-api-access-7q77c\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930380 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-etcd-client\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930429 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mft2\" (UniqueName: \"kubernetes.io/projected/cb5172ad-e8a1-4893-a33f-9e95b26fd720-kube-api-access-4mft2\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930453 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wtd9\" (UniqueName: \"kubernetes.io/projected/57dbe731-30cc-45f4-b457-346f62af94fa-kube-api-access-2wtd9\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930472 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/505ad756-8433-456f-8a6a-d391d7da9b1c-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930494 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb5172ad-e8a1-4893-a33f-9e95b26fd720-auth-proxy-config\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws"
Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930520 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930557 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57dbe731-30cc-45f4-b457-346f62af94fa-audit-dir\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930578 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb5172ad-e8a1-4893-a33f-9e95b26fd720-config\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.930599 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/505ad756-8433-456f-8a6a-d391d7da9b1c-config\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.932215 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.932407 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.935433 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-qhrd4"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.935624 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.938644 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pq598"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.938672 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.938684 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-h7g5m"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.938741 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.942498 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.942529 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-sd7g8"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.943279 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.944568 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.949589 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.949808 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.953218 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.953308 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.958786 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.958948 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.962302 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.962476 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.964948 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.966944 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.967161 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.970038 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.970197 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.972515 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-dgpvm"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.972631 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.974899 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.975029 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dgpvm" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.979317 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.979626 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.982155 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.982369 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.985189 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.985772 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.985940 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.996626 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p"] Dec 12 16:17:04 crc kubenswrapper[5116]: I1212 16:17:04.996861 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.005306 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.005524 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.005988 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.011134 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tbppz"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.011402 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.014901 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-97glr"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.015270 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.018705 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.018965 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-97glr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.023162 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.023403 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.025035 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.026037 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.026064 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qgtsr"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.026073 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.026082 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-g5nbl"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.026093 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4v8b9"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.028559 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-p4pvw"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.028708 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.029123 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4v8b9" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031283 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0315e170-e93e-4945-89e2-3e5e56e0d317-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031324 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cb5172ad-e8a1-4893-a33f-9e95b26fd720-machine-approver-tls\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031348 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-etcd-serving-ca\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031411 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjsrl\" (UniqueName: \"kubernetes.io/projected/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-kube-api-access-xjsrl\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " 
pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031451 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21b028a-7b09-4b86-9712-63820ff56d55-serving-cert\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031477 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-encryption-config\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031538 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bed86e5d-77df-45bf-ae08-16b99f150f6d-webhook-certs\") pod \"multus-admission-controller-69db94689b-h7g5m\" (UID: \"bed86e5d-77df-45bf-ae08-16b99f150f6d\") " pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031571 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-serving-cert\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031594 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031614 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-ca\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031634 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a8a3e58-1eef-468b-84a4-5b3071698628-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031657 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff66a0b-6756-4fd2-8fa5-756289614a15-serving-cert\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031672 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-config\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 
16:17:05.031689 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-trusted-ca-bundle\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031709 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031728 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0315e170-e93e-4945-89e2-3e5e56e0d317-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031746 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031769 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-service-ca\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031795 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-audit-policies\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031810 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-serving-cert\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031828 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031845 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-serving-cert\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031905 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7q77c\" (UniqueName: \"kubernetes.io/projected/505ad756-8433-456f-8a6a-d391d7da9b1c-kube-api-access-7q77c\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031925 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcm4s\" (UniqueName: \"kubernetes.io/projected/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-kube-api-access-zcm4s\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031948 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-etcd-client\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031964 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1da6019f-ecaf-43cc-8df2-cddce4345203-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qrb8l\" (UID: \"1da6019f-ecaf-43cc-8df2-cddce4345203\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.031984 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7c8\" (UniqueName: \"kubernetes.io/projected/4a88744b-ced0-4609-bede-f65d27510b47-kube-api-access-gf7c8\") pod \"dns-operator-799b87ffcd-k9w8q\" 
(UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032001 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-config\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032016 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-policies\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032033 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032049 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 
16:17:05.032078 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032121 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0315e170-e93e-4945-89e2-3e5e56e0d317-config\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032149 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-oauth-serving-cert\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032148 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-etcd-serving-ca\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032302 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-client-ca\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: 
\"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032333 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-tmp\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032354 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr65s\" (UniqueName: \"kubernetes.io/projected/e21b028a-7b09-4b86-9712-63820ff56d55-kube-api-access-tr65s\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032372 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwc5\" (UniqueName: \"kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032390 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e8310b-4d7c-4c19-82af-587b427fc159-config\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032411 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032446 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032482 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm54d\" (UniqueName: \"kubernetes.io/projected/572c6180-44e5-4299-afe5-a5483f6e0711-kube-api-access-vm54d\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032518 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mft2\" (UniqueName: \"kubernetes.io/projected/cb5172ad-e8a1-4893-a33f-9e95b26fd720-kube-api-access-4mft2\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032539 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-config\") pod 
\"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032588 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032607 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e8310b-4d7c-4c19-82af-587b427fc159-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032624 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghjf\" (UniqueName: \"kubernetes.io/projected/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-kube-api-access-bghjf\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032794 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8pz\" (UniqueName: \"kubernetes.io/projected/6a8a3e58-1eef-468b-84a4-5b3071698628-kube-api-access-7x8pz\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032852 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2wtd9\" (UniqueName: \"kubernetes.io/projected/57dbe731-30cc-45f4-b457-346f62af94fa-kube-api-access-2wtd9\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032929 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-k9w8q"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032946 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-24z4m"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032944 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/505ad756-8433-456f-8a6a-d391d7da9b1c-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032959 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032968 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.032977 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-qhrd4"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033031 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-px8sg\" (UniqueName: \"kubernetes.io/projected/bed86e5d-77df-45bf-ae08-16b99f150f6d-kube-api-access-px8sg\") pod \"multus-admission-controller-69db94689b-h7g5m\" (UID: \"bed86e5d-77df-45bf-ae08-16b99f150f6d\") " pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033039 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033053 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033073 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb5172ad-e8a1-4893-a33f-9e95b26fd720-auth-proxy-config\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033054 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qh8zt"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033128 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033142 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033155 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lw784"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033168 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033170 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-p4pvw" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033181 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-87slr"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033193 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033205 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-svwnw"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033219 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033249 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e21b028a-7b09-4b86-9712-63820ff56d55-tmp\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033293 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj7b8\" (UniqueName: \"kubernetes.io/projected/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-kube-api-access-xj7b8\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033328 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-serving-cert\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033305 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033379 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033383 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-audit-policies\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033398 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dgpvm"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033451 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"] Dec 12 16:17:05 crc 
kubenswrapper[5116]: I1212 16:17:05.033465 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033477 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-648v2"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033504 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a88744b-ced0-4609-bede-f65d27510b47-metrics-tls\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033607 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572c6180-44e5-4299-afe5-a5483f6e0711-serving-cert\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033647 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-client\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033716 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033746 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff66a0b-6756-4fd2-8fa5-756289614a15-config\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033763 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-client-ca\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033787 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033853 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-service-ca\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033877 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033897 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-config\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034282 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/572c6180-44e5-4299-afe5-a5483f6e0711-tmp-dir\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.033955 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb5172ad-e8a1-4893-a33f-9e95b26fd720-auth-proxy-config\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034306 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eff66a0b-6756-4fd2-8fa5-756289614a15-trusted-ca\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034360 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc48s\" (UniqueName: \"kubernetes.io/projected/24053646-aeb7-426b-8065-63075e9aa0c8-kube-api-access-mc48s\") pod \"downloads-747b44746d-g5nbl\" (UID: \"24053646-aeb7-426b-8065-63075e9aa0c8\") " pod="openshift-console/downloads-747b44746d-g5nbl" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034416 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034435 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57dbe731-30cc-45f4-b457-346f62af94fa-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034437 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2e8310b-4d7c-4c19-82af-587b427fc159-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034482 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7xx9\" (UniqueName: \"kubernetes.io/projected/d2e8310b-4d7c-4c19-82af-587b427fc159-kube-api-access-h7xx9\") pod 
\"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034517 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57dbe731-30cc-45f4-b457-346f62af94fa-audit-dir\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034536 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a88744b-ced0-4609-bede-f65d27510b47-tmp-dir\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034556 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2vpr\" (UniqueName: \"kubernetes.io/projected/1da6019f-ecaf-43cc-8df2-cddce4345203-kube-api-access-g2vpr\") pod \"cluster-samples-operator-6b564684c8-qrb8l\" (UID: \"1da6019f-ecaf-43cc-8df2-cddce4345203\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034574 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57dbe731-30cc-45f4-b457-346f62af94fa-audit-dir\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034577 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-config\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034631 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb5172ad-e8a1-4893-a33f-9e95b26fd720-config\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034657 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034681 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034705 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a8a3e58-1eef-468b-84a4-5b3071698628-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: 
\"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034731 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghz4\" (UniqueName: \"kubernetes.io/projected/eff66a0b-6756-4fd2-8fa5-756289614a15-kube-api-access-rghz4\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034755 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-oauth-config\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034779 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a8a3e58-1eef-468b-84a4-5b3071698628-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034805 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/505ad756-8433-456f-8a6a-d391d7da9b1c-config\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034828 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0315e170-e93e-4945-89e2-3e5e56e0d317-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034941 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-dir\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034963 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034982 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a8a3e58-1eef-468b-84a4-5b3071698628-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.034997 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: 
\"kubernetes.io/empty-dir/6a8a3e58-1eef-468b-84a4-5b3071698628-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.035019 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb5172ad-e8a1-4893-a33f-9e95b26fd720-config\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.035478 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/505ad756-8433-456f-8a6a-d391d7da9b1c-config\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037357 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037382 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037393 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037403 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-97glr"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037414 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037426 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037437 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-h7g5m"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037446 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037454 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-648v2"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037463 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4v8b9"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037472 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037480 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"] Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.037534 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.038284 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/505ad756-8433-456f-8a6a-d391d7da9b1c-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.038706 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cb5172ad-e8a1-4893-a33f-9e95b26fd720-machine-approver-tls\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.039348 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-serving-cert\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.039476 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-etcd-client\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.041003 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57dbe731-30cc-45f4-b457-346f62af94fa-encryption-config\") pod \"apiserver-8596bd845d-pq598\" (UID: 
\"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.044091 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.044229 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.044233 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.044502 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.045454 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.085477 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.104665 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.125205 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135770 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-px8sg\" (UniqueName: \"kubernetes.io/projected/bed86e5d-77df-45bf-ae08-16b99f150f6d-kube-api-access-px8sg\") pod \"multus-admission-controller-69db94689b-h7g5m\" (UID: \"bed86e5d-77df-45bf-ae08-16b99f150f6d\") " pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135812 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135843 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e21b028a-7b09-4b86-9712-63820ff56d55-tmp\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135866 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xj7b8\" (UniqueName: \"kubernetes.io/projected/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-kube-api-access-xj7b8\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135891 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-serving-cert\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135918 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a88744b-ced0-4609-bede-f65d27510b47-metrics-tls\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135941 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572c6180-44e5-4299-afe5-a5483f6e0711-serving-cert\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135965 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-client\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.135993 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff66a0b-6756-4fd2-8fa5-756289614a15-config\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136015 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-client-ca\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: 
\"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136045 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136071 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-service-ca\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136095 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136140 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-config\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136163 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/572c6180-44e5-4299-afe5-a5483f6e0711-tmp-dir\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136190 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eff66a0b-6756-4fd2-8fa5-756289614a15-trusted-ca\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136405 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e21b028a-7b09-4b86-9712-63820ff56d55-tmp\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136876 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mc48s\" (UniqueName: \"kubernetes.io/projected/24053646-aeb7-426b-8065-63075e9aa0c8-kube-api-access-mc48s\") pod \"downloads-747b44746d-g5nbl\" (UID: \"24053646-aeb7-426b-8065-63075e9aa0c8\") " pod="openshift-console/downloads-747b44746d-g5nbl" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136921 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136942 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2e8310b-4d7c-4c19-82af-587b427fc159-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136959 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h7xx9\" (UniqueName: \"kubernetes.io/projected/d2e8310b-4d7c-4c19-82af-587b427fc159-kube-api-access-h7xx9\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.136996 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a88744b-ced0-4609-bede-f65d27510b47-tmp-dir\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137015 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2vpr\" (UniqueName: \"kubernetes.io/projected/1da6019f-ecaf-43cc-8df2-cddce4345203-kube-api-access-g2vpr\") pod \"cluster-samples-operator-6b564684c8-qrb8l\" (UID: \"1da6019f-ecaf-43cc-8df2-cddce4345203\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137030 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-config\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: 
\"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137051 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137081 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137097 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a8a3e58-1eef-468b-84a4-5b3071698628-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137130 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rghz4\" (UniqueName: \"kubernetes.io/projected/eff66a0b-6756-4fd2-8fa5-756289614a15-kube-api-access-rghz4\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137166 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-oauth-config\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137182 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a8a3e58-1eef-468b-84a4-5b3071698628-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137199 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0315e170-e93e-4945-89e2-3e5e56e0d317-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137223 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff66a0b-6756-4fd2-8fa5-756289614a15-config\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137217 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-dir\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 
16:17:05.137284 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137302 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a8a3e58-1eef-468b-84a4-5b3071698628-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137320 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/6a8a3e58-1eef-468b-84a4-5b3071698628-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137339 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0315e170-e93e-4945-89e2-3e5e56e0d317-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137367 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xjsrl\" (UniqueName: 
\"kubernetes.io/projected/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-kube-api-access-xjsrl\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137384 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21b028a-7b09-4b86-9712-63820ff56d55-serving-cert\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137424 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bed86e5d-77df-45bf-ae08-16b99f150f6d-webhook-certs\") pod \"multus-admission-controller-69db94689b-h7g5m\" (UID: \"bed86e5d-77df-45bf-ae08-16b99f150f6d\") " pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137439 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-serving-cert\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137454 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137469 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-ca\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137491 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a8a3e58-1eef-468b-84a4-5b3071698628-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137516 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff66a0b-6756-4fd2-8fa5-756289614a15-serving-cert\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137530 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-config\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137544 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-trusted-ca-bundle\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: 
I1212 16:17:05.137561 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137576 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0315e170-e93e-4945-89e2-3e5e56e0d317-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137598 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137616 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-service-ca\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137640 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: 
\"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137657 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-serving-cert\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137679 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zcm4s\" (UniqueName: \"kubernetes.io/projected/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-kube-api-access-zcm4s\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137700 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1da6019f-ecaf-43cc-8df2-cddce4345203-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qrb8l\" (UID: \"1da6019f-ecaf-43cc-8df2-cddce4345203\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137708 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-client-ca\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137720 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7c8\" 
(UniqueName: \"kubernetes.io/projected/4a88744b-ced0-4609-bede-f65d27510b47-kube-api-access-gf7c8\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137738 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-config\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137754 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-policies\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137771 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137786 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137804 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137829 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0315e170-e93e-4945-89e2-3e5e56e0d317-config\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137843 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-oauth-serving-cert\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137858 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-client-ca\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137872 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-tmp\") pod 
\"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.137888 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tr65s\" (UniqueName: \"kubernetes.io/projected/e21b028a-7b09-4b86-9712-63820ff56d55-kube-api-access-tr65s\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138126 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/6a8a3e58-1eef-468b-84a4-5b3071698628-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138431 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-dir\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138667 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-service-ca\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138701 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a8a3e58-1eef-468b-84a4-5b3071698628-tmp\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138716 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwc5\" (UniqueName: \"kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138744 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e8310b-4d7c-4c19-82af-587b427fc159-config\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138765 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138795 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") "
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138813 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vm54d\" (UniqueName: \"kubernetes.io/projected/572c6180-44e5-4299-afe5-a5483f6e0711-kube-api-access-vm54d\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138888 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-config\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138916 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138939 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e8310b-4d7c-4c19-82af-587b427fc159-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138957 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bghjf\" (UniqueName:
\"kubernetes.io/projected/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-kube-api-access-bghjf\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.138977 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7x8pz\" (UniqueName: \"kubernetes.io/projected/6a8a3e58-1eef-468b-84a4-5b3071698628-kube-api-access-7x8pz\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.139004 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-config\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.139160 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/572c6180-44e5-4299-afe5-a5483f6e0711-tmp-dir\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.141696 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-tmp\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.141760 5116 operation_generator.go:615]
"MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0315e170-e93e-4945-89e2-3e5e56e0d317-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.141918 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-ca\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.142542 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eff66a0b-6756-4fd2-8fa5-756289614a15-trusted-ca\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.143038 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572c6180-44e5-4299-afe5-a5483f6e0711-serving-cert\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.143191 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-config\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212
16:17:05.143218 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-client-ca\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.143409 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-config\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.144047 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21b028a-7b09-4b86-9712-63820ff56d55-serving-cert\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.144759 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0315e170-e93e-4945-89e2-3e5e56e0d317-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.145167 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2e8310b-4d7c-4c19-82af-587b427fc159-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") "
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.146364 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.146355 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/572c6180-44e5-4299-afe5-a5483f6e0711-etcd-client\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.146725 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1da6019f-ecaf-43cc-8df2-cddce4345203-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qrb8l\" (UID: \"1da6019f-ecaf-43cc-8df2-cddce4345203\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.146998 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a8a3e58-1eef-468b-84a4-5b3071698628-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.147089 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName:
\"kubernetes.io/empty-dir/4a88744b-ced0-4609-bede-f65d27510b47-tmp-dir\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.147221 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff66a0b-6756-4fd2-8fa5-756289614a15-serving-cert\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.147972 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.148387 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-config\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.148473 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.149209 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e8310b-4d7c-4c19-82af-587b427fc159-config\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\"
(UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.149641 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.149749 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.149859 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.150122 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e8310b-4d7c-4c19-82af-587b427fc159-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.150302 5116
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.151353 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.151731 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-serving-cert\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.151769 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-serving-cert\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.152524 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\")
" pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.152567 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.152749 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0315e170-e93e-4945-89e2-3e5e56e0d317-config\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.152781 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.153378 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.154093 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.154960 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-policies\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.154280 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.155224 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.166188 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.174136 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName:
\"kubernetes.io/secret/6a8a3e58-1eef-468b-84a4-5b3071698628-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.184789 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.205400 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.224407 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.232803 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a88744b-ced0-4609-bede-f65d27510b47-metrics-tls\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.244241 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.264269 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.284723 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.304240 5116 reflector.go:430] "Caches populated" type="*v1.Secret"
reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.325170 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.332117 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.351452 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.359699 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.364997 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.384330 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.392184 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-oauth-config\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") "
pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.403940 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.425417 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.432656 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-oauth-serving-cert\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.445059 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.453918 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-serving-cert\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.464844 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.468708 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-console-config\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:05 crc
kubenswrapper[5116]: I1212 16:17:05.484899 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.491980 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-service-ca\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.511563 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.519009 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-trusted-ca-bundle\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.525034 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.544843 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.553278 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bed86e5d-77df-45bf-ae08-16b99f150f6d-webhook-certs\") pod \"multus-admission-controller-69db94689b-h7g5m\" (UID: \"bed86e5d-77df-45bf-ae08-16b99f150f6d\") " pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m"
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.584466 5116
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.604409 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.625183 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.645558 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.664724 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.686435 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.705346 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.725886 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.746547 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.764732 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.786385
5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.804810 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.824833 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.845994 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.865692 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.884598 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.905024 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.925466 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.944251 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.965025 5116 reflector.go:430] "Caches populated" type="*v1.Secret"
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.983436 5116 request.go:752] "Waited before sending request" delay="1.012986565s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0" Dec 12 16:17:05 crc kubenswrapper[5116]: I1212 16:17:05.984877 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.005625 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.027071 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.045702 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.064984 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.084562 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.105026 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.126091 5116 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.144694 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.165411 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.184939 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.204197 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.225529 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.245098 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.265511 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.296138 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.305294 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 
16:17:06.325076 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.345281 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.364802 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.385732 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.405606 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.425353 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.445348 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.464718 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.484913 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 
16:17:06.504335 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.524715 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.545862 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.565795 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.585317 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.605832 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.624586 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.644757 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.665529 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.685930 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 
16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.705226 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.724836 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.745283 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.765254 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.784877 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.804932 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.825322 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.905034 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.925622 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.946149 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.964907 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 16:17:06 crc kubenswrapper[5116]: I1212 16:17:06.985069 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.004795 5116 request.go:752] "Waited before sending request" delay="1.967014687s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.006978 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.024669 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.044893 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.065320 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.085155 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.106222 5116 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.125204 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.187304 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-px8sg\" (UniqueName: \"kubernetes.io/projected/bed86e5d-77df-45bf-ae08-16b99f150f6d-kube-api-access-px8sg\") pod \"multus-admission-controller-69db94689b-h7g5m\" (UID: \"bed86e5d-77df-45bf-ae08-16b99f150f6d\") " pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.238369 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a8a3e58-1eef-468b-84a4-5b3071698628-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.442615 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8pz\" (UniqueName: \"kubernetes.io/projected/6a8a3e58-1eef-468b-84a4-5b3071698628-kube-api-access-7x8pz\") pod \"cluster-image-registry-operator-86c45576b9-g52qp\" (UID: \"6a8a3e58-1eef-468b-84a4-5b3071698628\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.502349 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.525282 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.545903 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.565769 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.578915 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-ca-trust-extracted\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.578992 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f5297fd-58f7-4678-94d1-6afb8b1639cf-serving-cert\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579026 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-certificates\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 
16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579062 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f5297fd-58f7-4678-94d1-6afb8b1639cf-available-featuregates\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579144 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbbz\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-kube-api-access-ccbbz\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579275 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: E1212 16:17:07.579677 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.079659454 +0000 UTC m=+122.543871220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579741 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579773 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579837 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-encryption-config\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579884 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-config\") pod \"apiserver-9ddfb9f55-svwnw\" 
(UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579941 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d377873-6680-42c5-afb1-52f63ffff4a4-audit-dir\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.579968 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580050 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-tls\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580071 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-image-import-ca\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580092 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgztn\" (UniqueName: 
\"kubernetes.io/projected/8d377873-6680-42c5-afb1-52f63ffff4a4-kube-api-access-tgztn\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580158 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-bound-sa-token\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580184 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-audit\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580220 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-trusted-ca\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580240 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-images\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580277 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p88j\" (UniqueName: \"kubernetes.io/projected/0f5297fd-58f7-4678-94d1-6afb8b1639cf-kube-api-access-2p88j\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580306 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d377873-6680-42c5-afb1-52f63ffff4a4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580447 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-serving-cert\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580490 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-etcd-client\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580510 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgx5\" (UniqueName: \"kubernetes.io/projected/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-kube-api-access-bbgx5\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580603 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-config\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.580699 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-installation-pull-secrets\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.605117 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.625065 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.645714 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.665017 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.682041 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:07 crc kubenswrapper[5116]: E1212 16:17:07.682324 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.182273431 +0000 UTC m=+122.646485237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.682773 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d377873-6680-42c5-afb1-52f63ffff4a4-audit-dir\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.682836 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-stats-auth\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.682883 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lmg2n\" (UniqueName: \"kubernetes.io/projected/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-kube-api-access-lmg2n\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.682899 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d377873-6680-42c5-afb1-52f63ffff4a4-audit-dir\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.682928 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f177940-05e7-4aec-a952-b0eafdc0d9c2-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683057 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-bound-sa-token\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683132 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d377873-6680-42c5-afb1-52f63ffff4a4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683240 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d377873-6680-42c5-afb1-52f63ffff4a4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683190 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-tls\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683350 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgztn\" (UniqueName: \"kubernetes.io/projected/8d377873-6680-42c5-afb1-52f63ffff4a4-kube-api-access-tgztn\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683382 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjxl\" (UniqueName: \"kubernetes.io/projected/f1477647-214b-4ac9-9c1c-bafb5b506eb3-kube-api-access-4mjxl\") pod \"migrator-866fcbc849-qfd7p\" (UID: \"f1477647-214b-4ac9-9c1c-bafb5b506eb3\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683411 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2fl\" (UniqueName: \"kubernetes.io/projected/e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3-kube-api-access-7p2fl\") pod \"package-server-manager-77f986bd66-9th2f\" (UID: \"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683435 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80cc078-24bf-4a75-b2ae-76f252e843f9-secret-volume\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683460 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683515 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed4930f4-4d37-415e-a712-9574322f6ccc-metrics-tls\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683624 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95823ee2-7080-4b23-87d9-e69d42ab1787-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k8t8q\" (UID: \"95823ee2-7080-4b23-87d9-e69d42ab1787\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683745 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9c44a8b-640d-4806-a985-d12ada8b88dd-tmp\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683784 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2p88j\" (UniqueName: \"kubernetes.io/projected/0f5297fd-58f7-4678-94d1-6afb8b1639cf-kube-api-access-2p88j\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683807 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3687a9b9-879b-47e3-bc75-6a382ac0febe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683827 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0df42631-1a1c-4104-9f9d-9e197f7cbe33-signing-cabundle\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683906 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgx5\" (UniqueName: \"kubernetes.io/projected/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-kube-api-access-bbgx5\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.683990 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gh82\" (UniqueName: \"kubernetes.io/projected/b759840e-6855-49b1-b4e5-51143b00c6eb-kube-api-access-9gh82\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684101 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80cc078-24bf-4a75-b2ae-76f252e843f9-config-volume\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684258 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4930f4-4d37-415e-a712-9574322f6ccc-config-volume\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684316 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ed202e-423f-478d-882c-f49ad62a8660-serving-cert\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684369 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3687a9b9-879b-47e3-bc75-6a382ac0febe-ready\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684559 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684682 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-srv-cert\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684787 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f5297fd-58f7-4678-94d1-6afb8b1639cf-available-featuregates\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684828 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-csi-data-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684865 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wh8c\" (UniqueName: \"kubernetes.io/projected/64b5e43d-0337-46b2-b4be-93dbb15ef982-kube-api-access-2wh8c\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.684903 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.685033 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed4930f4-4d37-415e-a712-9574322f6ccc-tmp-dir\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.685191 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f5297fd-58f7-4678-94d1-6afb8b1639cf-available-featuregates\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.685099 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3687a9b9-879b-47e3-bc75-6a382ac0febe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.685428 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-serving-cert\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.685571 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.685605 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 12 16:17:07 crc kubenswrapper[5116]: E1212 16:17:07.685969 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.18595671 +0000 UTC m=+122.650168486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.686298 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.686586 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b759840e-6855-49b1-b4e5-51143b00c6eb-certs\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.686704 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64b5e43d-0337-46b2-b4be-93dbb15ef982-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.686768 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92lhf\" (UniqueName: \"kubernetes.io/projected/0df42631-1a1c-4104-9f9d-9e197f7cbe33-kube-api-access-92lhf\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.686856 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c33e2a61-8e55-4309-be5e-82581b191636-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687242 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmrc\" (UniqueName: \"kubernetes.io/projected/ed4930f4-4d37-415e-a712-9574322f6ccc-kube-api-access-krmrc\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687358 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fkp\" (UniqueName: \"kubernetes.io/projected/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-kube-api-access-l9fkp\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687526 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-encryption-config\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687614 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687661 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/b9c44a8b-640d-4806-a985-d12ada8b88dd-kube-api-access-gx5hn\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687733 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64b5e43d-0337-46b2-b4be-93dbb15ef982-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687769 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-tmpfs\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687827 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-srv-cert\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687863 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smcql\" (UniqueName: \"kubernetes.io/projected/98ed202e-423f-478d-882c-f49ad62a8660-kube-api-access-smcql\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687919 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f177940-05e7-4aec-a952-b0eafdc0d9c2-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.687960 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9th2f\" (UID: \"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688055 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-serving-cert\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688095 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7x6\" (UniqueName: \"kubernetes.io/projected/957f59ba-d9a7-424b-94bb-8899126450ed-kube-api-access-db7x6\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688165 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rcq\" (UniqueName: \"kubernetes.io/projected/95823ee2-7080-4b23-87d9-e69d42ab1787-kube-api-access-b4rcq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k8t8q\" (UID: \"95823ee2-7080-4b23-87d9-e69d42ab1787\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688206 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688245 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-image-import-ca\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688272 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f177940-05e7-4aec-a952-b0eafdc0d9c2-images\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688297 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33e2a61-8e55-4309-be5e-82581b191636-config\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688324 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lr4\" (UniqueName: \"kubernetes.io/projected/c33e2a61-8e55-4309-be5e-82581b191636-kube-api-access-k4lr4\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688353 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-audit\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688618 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688690 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26c54200-b864-4aee-abb7-f486e4bd3236-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688724 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-apiservice-cert\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688758 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzndp\" (UniqueName: \"kubernetes.io/projected/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-kube-api-access-rzndp\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688813 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-trusted-ca\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.688917 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-images\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689036 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26c54200-b864-4aee-abb7-f486e4bd3236-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689243 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0df42631-1a1c-4104-9f9d-9e197f7cbe33-signing-key\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689322 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7nm\" (UniqueName: \"kubernetes.io/projected/5f177940-05e7-4aec-a952-b0eafdc0d9c2-kube-api-access-5x7nm\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689363 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b759840e-6855-49b1-b4e5-51143b00c6eb-node-bootstrap-token\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689394 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-registration-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689426 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-plugins-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689467 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-etcd-client\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689589 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-config\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689672 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl8w8\" (UniqueName: \"kubernetes.io/projected/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-kube-api-access-tl8w8\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689730 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-installation-pull-secrets\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689774 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-config\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689826 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkd22\" (UniqueName: \"kubernetes.io/projected/b80cc078-24bf-4a75-b2ae-76f252e843f9-kube-api-access-dkd22\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.689960 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ccbbz\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-kube-api-access-ccbbz\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.690020 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.690135 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-default-certificate\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.690243 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-service-ca-bundle\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.690355 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-trusted-ca\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.690571 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f5297fd-58f7-4678-94d1-6afb8b1639cf-serving-cert\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.690863 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-ca-trust-extracted\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.690964 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c54200-b864-4aee-abb7-f486e4bd3236-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691058 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-certificates\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691129 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh89c\" (UniqueName: \"kubernetes.io/projected/980cfea9-194c-4650-9dee-7ede187c365f-kube-api-access-dh89c\") pod \"ingress-canary-4v8b9\" (UID: \"980cfea9-194c-4650-9dee-7ede187c365f\") " pod="openshift-ingress-canary/ingress-canary-4v8b9"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691179 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-tmpfs\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691344 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-ca-trust-extracted\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691389 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-mountpoint-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691463 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6cq\" (UniqueName: \"kubernetes.io/projected/3687a9b9-879b-47e3-bc75-6a382ac0febe-kube-api-access-5r6cq\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691586 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-webhook-cert\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"
Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691646 5116
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691683 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-tmpfs\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691807 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-socket-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691838 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ed202e-423f-478d-882c-f49ad62a8660-config\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691869 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/980cfea9-194c-4650-9dee-7ede187c365f-cert\") pod \"ingress-canary-4v8b9\" (UID: \"980cfea9-194c-4650-9dee-7ede187c365f\") " pod="openshift-ingress-canary/ingress-canary-4v8b9" Dec 12 16:17:07 crc 
kubenswrapper[5116]: I1212 16:17:07.691889 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c54200-b864-4aee-abb7-f486e4bd3236-config\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691915 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-config\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.691936 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-metrics-certs\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.692150 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-certificates\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.704811 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.725716 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.745531 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.765492 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.771634 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0315e170-e93e-4945-89e2-3e5e56e0d317-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-xfkjr\" (UID: \"0315e170-e93e-4945-89e2-3e5e56e0d317\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.785562 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.792864 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793084 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/980cfea9-194c-4650-9dee-7ede187c365f-cert\") pod \"ingress-canary-4v8b9\" (UID: \"980cfea9-194c-4650-9dee-7ede187c365f\") " pod="openshift-ingress-canary/ingress-canary-4v8b9" Dec 12 16:17:07 crc kubenswrapper[5116]: E1212 16:17:07.793131 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.293078619 +0000 UTC m=+122.757290375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793190 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c54200-b864-4aee-abb7-f486e4bd3236-config\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793221 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-metrics-certs\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793245 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-stats-auth\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " 
pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793266 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmg2n\" (UniqueName: \"kubernetes.io/projected/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-kube-api-access-lmg2n\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793288 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f177940-05e7-4aec-a952-b0eafdc0d9c2-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793910 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c54200-b864-4aee-abb7-f486e4bd3236-config\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793328 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjxl\" (UniqueName: \"kubernetes.io/projected/f1477647-214b-4ac9-9c1c-bafb5b506eb3-kube-api-access-4mjxl\") pod \"migrator-866fcbc849-qfd7p\" (UID: \"f1477647-214b-4ac9-9c1c-bafb5b506eb3\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.793985 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2fl\" (UniqueName: 
\"kubernetes.io/projected/e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3-kube-api-access-7p2fl\") pod \"package-server-manager-77f986bd66-9th2f\" (UID: \"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794008 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80cc078-24bf-4a75-b2ae-76f252e843f9-secret-volume\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794052 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794073 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed4930f4-4d37-415e-a712-9574322f6ccc-metrics-tls\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794250 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95823ee2-7080-4b23-87d9-e69d42ab1787-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k8t8q\" (UID: \"95823ee2-7080-4b23-87d9-e69d42ab1787\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" Dec 12 16:17:07 
crc kubenswrapper[5116]: I1212 16:17:07.794275 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9c44a8b-640d-4806-a985-d12ada8b88dd-tmp\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794353 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3687a9b9-879b-47e3-bc75-6a382ac0febe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794373 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0df42631-1a1c-4104-9f9d-9e197f7cbe33-signing-cabundle\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794702 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9c44a8b-640d-4806-a985-d12ada8b88dd-tmp\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794758 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9gh82\" (UniqueName: \"kubernetes.io/projected/b759840e-6855-49b1-b4e5-51143b00c6eb-kube-api-access-9gh82\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " 
pod="openshift-machine-config-operator/machine-config-server-p4pvw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794779 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80cc078-24bf-4a75-b2ae-76f252e843f9-config-volume\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794872 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4930f4-4d37-415e-a712-9574322f6ccc-config-volume\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794893 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ed202e-423f-478d-882c-f49ad62a8660-serving-cert\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.795506 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4930f4-4d37-415e-a712-9574322f6ccc-config-volume\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.794909 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3687a9b9-879b-47e3-bc75-6a382ac0febe-ready\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.795879 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.795915 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-srv-cert\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.795940 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-csi-data-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.795961 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2wh8c\" (UniqueName: \"kubernetes.io/projected/64b5e43d-0337-46b2-b4be-93dbb15ef982-kube-api-access-2wh8c\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796000 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-profile-collector-cert\") 
pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796021 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed4930f4-4d37-415e-a712-9574322f6ccc-tmp-dir\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796036 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3687a9b9-879b-47e3-bc75-6a382ac0febe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796064 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-serving-cert\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796084 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796093 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/3687a9b9-879b-47e3-bc75-6a382ac0febe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796152 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b759840e-6855-49b1-b4e5-51143b00c6eb-certs\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796281 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64b5e43d-0337-46b2-b4be-93dbb15ef982-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796303 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80cc078-24bf-4a75-b2ae-76f252e843f9-config-volume\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796349 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-92lhf\" (UniqueName: \"kubernetes.io/projected/0df42631-1a1c-4104-9f9d-9e197f7cbe33-kube-api-access-92lhf\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796434 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c33e2a61-8e55-4309-be5e-82581b191636-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796511 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krmrc\" (UniqueName: \"kubernetes.io/projected/ed4930f4-4d37-415e-a712-9574322f6ccc-kube-api-access-krmrc\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796555 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed4930f4-4d37-415e-a712-9574322f6ccc-tmp-dir\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796588 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l9fkp\" (UniqueName: \"kubernetes.io/projected/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-kube-api-access-l9fkp\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796641 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-csi-data-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796693 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/b9c44a8b-640d-4806-a985-d12ada8b88dd-kube-api-access-gx5hn\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796805 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64b5e43d-0337-46b2-b4be-93dbb15ef982-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796833 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3687a9b9-879b-47e3-bc75-6a382ac0febe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796862 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-tmpfs\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.796885 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3687a9b9-879b-47e3-bc75-6a382ac0febe-ready\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:07 crc kubenswrapper[5116]: 
I1212 16:17:07.796939 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-srv-cert\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797011 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smcql\" (UniqueName: \"kubernetes.io/projected/98ed202e-423f-478d-882c-f49ad62a8660-kube-api-access-smcql\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797072 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f177940-05e7-4aec-a952-b0eafdc0d9c2-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:07 crc kubenswrapper[5116]: E1212 16:17:07.797117 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.297081345 +0000 UTC m=+122.761293201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797167 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9th2f\" (UID: \"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797212 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-db7x6\" (UniqueName: \"kubernetes.io/projected/957f59ba-d9a7-424b-94bb-8899126450ed-kube-api-access-db7x6\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797252 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b4rcq\" (UniqueName: \"kubernetes.io/projected/95823ee2-7080-4b23-87d9-e69d42ab1787-kube-api-access-b4rcq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k8t8q\" (UID: \"95823ee2-7080-4b23-87d9-e69d42ab1787\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797277 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797304 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f177940-05e7-4aec-a952-b0eafdc0d9c2-images\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797282 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/980cfea9-194c-4650-9dee-7ede187c365f-cert\") pod \"ingress-canary-4v8b9\" (UID: \"980cfea9-194c-4650-9dee-7ede187c365f\") " pod="openshift-ingress-canary/ingress-canary-4v8b9" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797327 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33e2a61-8e55-4309-be5e-82581b191636-config\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797433 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lr4\" (UniqueName: \"kubernetes.io/projected/c33e2a61-8e55-4309-be5e-82581b191636-kube-api-access-k4lr4\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" Dec 12 
16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797487 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797525 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797606 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26c54200-b864-4aee-abb7-f486e4bd3236-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797649 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-apiservice-cert\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797721 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rzndp\" (UniqueName: \"kubernetes.io/projected/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-kube-api-access-rzndp\") pod 
\"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797802 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26c54200-b864-4aee-abb7-f486e4bd3236-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797869 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0df42631-1a1c-4104-9f9d-9e197f7cbe33-signing-key\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797919 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5x7nm\" (UniqueName: \"kubernetes.io/projected/5f177940-05e7-4aec-a952-b0eafdc0d9c2-kube-api-access-5x7nm\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797989 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b759840e-6855-49b1-b4e5-51143b00c6eb-node-bootstrap-token\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798053 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-registration-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798089 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-plugins-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798260 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tl8w8\" (UniqueName: \"kubernetes.io/projected/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-kube-api-access-tl8w8\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798359 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-config\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798434 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkd22\" (UniqueName: \"kubernetes.io/projected/b80cc078-24bf-4a75-b2ae-76f252e843f9-kube-api-access-dkd22\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:07 crc 
kubenswrapper[5116]: I1212 16:17:07.798484 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798568 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-default-certificate\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798639 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-service-ca-bundle\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798671 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f177940-05e7-4aec-a952-b0eafdc0d9c2-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798786 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c54200-b864-4aee-abb7-f486e4bd3236-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: 
\"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798859 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dh89c\" (UniqueName: \"kubernetes.io/projected/980cfea9-194c-4650-9dee-7ede187c365f-kube-api-access-dh89c\") pod \"ingress-canary-4v8b9\" (UID: \"980cfea9-194c-4650-9dee-7ede187c365f\") " pod="openshift-ingress-canary/ingress-canary-4v8b9" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798878 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.798910 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-tmpfs\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799012 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-mountpoint-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799077 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6cq\" (UniqueName: 
\"kubernetes.io/projected/3687a9b9-879b-47e3-bc75-6a382ac0febe-kube-api-access-5r6cq\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799209 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-webhook-cert\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799232 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26c54200-b864-4aee-abb7-f486e4bd3236-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799286 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-tmpfs\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.797928 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33e2a61-8e55-4309-be5e-82581b191636-config\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799447 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ed202e-423f-478d-882c-f49ad62a8660-serving-cert\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799567 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f177940-05e7-4aec-a952-b0eafdc0d9c2-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799591 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-mountpoint-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799694 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799739 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64b5e43d-0337-46b2-b4be-93dbb15ef982-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.799961 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-registration-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.800043 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-plugins-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.800271 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f177940-05e7-4aec-a952-b0eafdc0d9c2-images\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.800323 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-socket-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.800341 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ed202e-423f-478d-882c-f49ad62a8660-config\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.800397 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-tmpfs\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.800905 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ed202e-423f-478d-882c-f49ad62a8660-config\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.800969 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/957f59ba-d9a7-424b-94bb-8899126450ed-socket-dir\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.801472 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-tmpfs\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.801485 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0df42631-1a1c-4104-9f9d-9e197f7cbe33-signing-cabundle\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " 
pod="openshift-service-ca/service-ca-74545575db-dgpvm" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.801729 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-tmpfs\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.802244 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b759840e-6855-49b1-b4e5-51143b00c6eb-certs\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.802643 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed4930f4-4d37-415e-a712-9574322f6ccc-metrics-tls\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.802872 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b759840e-6855-49b1-b4e5-51143b00c6eb-node-bootstrap-token\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.803252 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-apiservice-cert\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.803879 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9th2f\" (UID: \"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.804070 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95823ee2-7080-4b23-87d9-e69d42ab1787-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k8t8q\" (UID: \"95823ee2-7080-4b23-87d9-e69d42ab1787\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.804348 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c33e2a61-8e55-4309-be5e-82581b191636-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.805030 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-webhook-cert\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.805838 5116 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.806312 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0df42631-1a1c-4104-9f9d-9e197f7cbe33-signing-key\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.809752 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c54200-b864-4aee-abb7-f486e4bd3236-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.810418 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-srv-cert\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.825004 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.830512 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.845604 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.865320 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.875416 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q77c\" (UniqueName: \"kubernetes.io/projected/505ad756-8433-456f-8a6a-d391d7da9b1c-kube-api-access-7q77c\") pod \"openshift-apiserver-operator-846cbfc458-wqsrz\" (UID: \"505ad756-8433-456f-8a6a-d391d7da9b1c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.884373 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.891428 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mft2\" (UniqueName: \"kubernetes.io/projected/cb5172ad-e8a1-4893-a33f-9e95b26fd720-kube-api-access-4mft2\") pod \"machine-approver-54c688565-ck6ws\" (UID: \"cb5172ad-e8a1-4893-a33f-9e95b26fd720\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.903734 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:07 crc 
kubenswrapper[5116]: E1212 16:17:07.904138 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.40406117 +0000 UTC m=+122.868272926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.904328 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:07 crc kubenswrapper[5116]: E1212 16:17:07.904769 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.404760508 +0000 UTC m=+122.868972264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.904975 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.917222 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wtd9\" (UniqueName: \"kubernetes.io/projected/57dbe731-30cc-45f4-b457-346f62af94fa-kube-api-access-2wtd9\") pod \"apiserver-8596bd845d-pq598\" (UID: \"57dbe731-30cc-45f4-b457-346f62af94fa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.945611 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.952506 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj7b8\" (UniqueName: \"kubernetes.io/projected/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-kube-api-access-xj7b8\") pod \"controller-manager-65b6cccf98-fsj7q\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.964901 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.976022 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr65s\" (UniqueName: \"kubernetes.io/projected/e21b028a-7b09-4b86-9712-63820ff56d55-kube-api-access-tr65s\") pod \"route-controller-manager-776cdc94d6-vq6gq\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.985578 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:07 crc kubenswrapper[5116]: I1212 16:17:07.995007 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcm4s\" (UniqueName: \"kubernetes.io/projected/ad073d6b-f522-47b7-a45f-1c4ae18f9a10-kube-api-access-zcm4s\") pod \"ingress-operator-6b9cb4dbcf-5vn44\" (UID: \"ad073d6b-f522-47b7-a45f-1c4ae18f9a10\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.004891 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.005595 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.006296 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:08.506275476 +0000 UTC m=+122.970487232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.010645 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rghz4\" (UniqueName: \"kubernetes.io/projected/eff66a0b-6756-4fd2-8fa5-756289614a15-kube-api-access-rghz4\") pod \"console-operator-67c89758df-qh8zt\" (UID: \"eff66a0b-6756-4fd2-8fa5-756289614a15\") " pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.024613 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-h7g5m"] Dec 12 16:17:08 crc kubenswrapper[5116]: W1212 16:17:08.032386 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbed86e5d_77df_45bf_ae08_16b99f150f6d.slice/crio-4d10dbebe79f8a363dfcbb8cd200c43faaf431286be51096ad0fbf85d7a50ef0 WatchSource:0}: Error finding container 4d10dbebe79f8a363dfcbb8cd200c43faaf431286be51096ad0fbf85d7a50ef0: Status 404 returned error can't find the container with id 4d10dbebe79f8a363dfcbb8cd200c43faaf431286be51096ad0fbf85d7a50ef0 Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.040546 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-bound-sa-token\") pod \"image-registry-66587d64c8-qgtsr\" 
(UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.044970 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.056757 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-tls\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.109166 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.109663 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.609643034 +0000 UTC m=+123.073854790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.124356 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.126590 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.145688 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.150718 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjsrl\" (UniqueName: \"kubernetes.io/projected/b68bd1cf-aa0c-43e2-a771-11c6c91d19dc-kube-api-access-xjsrl\") pod \"console-64d44f6ddf-qhrd4\" (UID: \"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc\") " pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.154895 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc48s\" (UniqueName: \"kubernetes.io/projected/24053646-aeb7-426b-8065-63075e9aa0c8-kube-api-access-mc48s\") pod \"downloads-747b44746d-g5nbl\" (UID: \"24053646-aeb7-426b-8065-63075e9aa0c8\") " 
pod="openshift-console/downloads-747b44746d-g5nbl" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.164407 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.171965 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.184628 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.193433 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-encryption-config\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.205197 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.209952 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.210126 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.710087032 +0000 UTC m=+123.174298788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.210395 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.210803 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.710782451 +0000 UTC m=+123.174994207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.215459 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-serving-cert\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.224643 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.232290 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-image-import-ca\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.244790 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.249834 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-audit\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: 
I1212 16:17:08.264791 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.270653 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-images\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.285330 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.296621 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d377873-6680-42c5-afb1-52f63ffff4a4-etcd-client\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.305464 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.312156 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.312274 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:08.812247826 +0000 UTC m=+123.276459582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.312759 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-config\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.313192 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.313728 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.813703055 +0000 UTC m=+123.277914821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.324854 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.334577 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-installation-pull-secrets\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.359703 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccbbz\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-kube-api-access-ccbbz\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.365449 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.374937 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f5297fd-58f7-4678-94d1-6afb8b1639cf-serving-cert\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: 
\"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.385571 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.393609 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-config\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.411684 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.415448 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d377873-6680-42c5-afb1-52f63ffff4a4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.416784 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.417229 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:08.917207626 +0000 UTC m=+123.381419382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.417633 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.418699 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:08.918678586 +0000 UTC m=+123.382890342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.425588 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.438915 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7c8\" (UniqueName: \"kubernetes.io/projected/4a88744b-ced0-4609-bede-f65d27510b47-kube-api-access-gf7c8\") pod \"dns-operator-799b87ffcd-k9w8q\" (UID: \"4a88744b-ced0-4609-bede-f65d27510b47\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.445026 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.453868 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7xx9\" (UniqueName: \"kubernetes.io/projected/d2e8310b-4d7c-4c19-82af-587b427fc159-kube-api-access-h7xx9\") pod \"openshift-controller-manager-operator-686468bdd5-gt5s5\" (UID: \"d2e8310b-4d7c-4c19-82af-587b427fc159\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.465583 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 16:17:08 
crc kubenswrapper[5116]: I1212 16:17:08.474286 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.486691 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.498032 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2vpr\" (UniqueName: \"kubernetes.io/projected/1da6019f-ecaf-43cc-8df2-cddce4345203-kube-api-access-g2vpr\") pod \"cluster-samples-operator-6b564684c8-qrb8l\" (UID: \"1da6019f-ecaf-43cc-8df2-cddce4345203\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.506072 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.518195 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm54d\" (UniqueName: \"kubernetes.io/projected/572c6180-44e5-4299-afe5-a5483f6e0711-kube-api-access-vm54d\") pod \"etcd-operator-69b85846b6-ng2wp\" (UID: \"572c6180-44e5-4299-afe5-a5483f6e0711\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.519321 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.519727 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.01971162 +0000 UTC m=+123.483923376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.525369 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.538620 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-metrics-certs\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.564784 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.581252 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-stats-auth\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.610932 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4mjxl\" (UniqueName: \"kubernetes.io/projected/f1477647-214b-4ac9-9c1c-bafb5b506eb3-kube-api-access-4mjxl\") pod \"migrator-866fcbc849-qfd7p\" (UID: \"f1477647-214b-4ac9-9c1c-bafb5b506eb3\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.621082 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.621686 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.121666279 +0000 UTC m=+123.585878035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.625290 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.629989 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.632138 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.637047 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80cc078-24bf-4a75-b2ae-76f252e843f9-secret-volume\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.638913 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" event={"ID":"bed86e5d-77df-45bf-ae08-16b99f150f6d","Type":"ContainerStarted","Data":"61315f2e59a11b672d90c5640adf9f01fed59ad576b0dec3927d77abdf18ff3e"}
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.638957 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" event={"ID":"bed86e5d-77df-45bf-ae08-16b99f150f6d","Type":"ContainerStarted","Data":"4d10dbebe79f8a363dfcbb8cd200c43faaf431286be51096ad0fbf85d7a50ef0"}
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.679778 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gh82\" (UniqueName: \"kubernetes.io/projected/b759840e-6855-49b1-b4e5-51143b00c6eb-kube-api-access-9gh82\") pod \"machine-config-server-p4pvw\" (UID: \"b759840e-6855-49b1-b4e5-51143b00c6eb\") " pod="openshift-machine-config-operator/machine-config-server-p4pvw"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.679877 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wh8c\" (UniqueName: \"kubernetes.io/projected/64b5e43d-0337-46b2-b4be-93dbb15ef982-kube-api-access-2wh8c\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.699247 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp"]
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.704978 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.708763 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-92lhf\" (UniqueName: \"kubernetes.io/projected/0df42631-1a1c-4104-9f9d-9e197f7cbe33-kube-api-access-92lhf\") pod \"service-ca-74545575db-dgpvm\" (UID: \"0df42631-1a1c-4104-9f9d-9e197f7cbe33\") " pod="openshift-service-ca/service-ca-74545575db-dgpvm"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.710255 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-serving-cert\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"
Dec 12 16:17:08 crc kubenswrapper[5116]: W1212 16:17:08.710430 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a8a3e58_1eef_468b_84a4_5b3071698628.slice/crio-27a6aaa1b24f6174284dbf453cad8027320edcaf4d6a6f9fcbcbf3fbf5df81e4 WatchSource:0}: Error finding container 27a6aaa1b24f6174284dbf453cad8027320edcaf4d6a6f9fcbcbf3fbf5df81e4: Status 404 returned error can't find the container with id 27a6aaa1b24f6174284dbf453cad8027320edcaf4d6a6f9fcbcbf3fbf5df81e4
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.722215 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.722775 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.222734205 +0000 UTC m=+123.686945971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.723378 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.723830 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.223803484 +0000 UTC m=+123.688015450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.724559 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.730023 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-srv-cert\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.738196 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dgpvm"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.764218 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-db7x6\" (UniqueName: \"kubernetes.io/projected/957f59ba-d9a7-424b-94bb-8899126450ed-kube-api-access-db7x6\") pod \"csi-hostpathplugin-648v2\" (UID: \"957f59ba-d9a7-424b-94bb-8899126450ed\") " pod="hostpath-provisioner/csi-hostpathplugin-648v2"
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.800149 5116 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.801171 5116 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.801279 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-default-certificate podName:3df802e1-3f15-4f5d-ae4e-514d50ff8bde nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.301256695 +0000 UTC m=+123.765468451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-default-certificate") pod "router-default-68cf44c8b8-sd7g8" (UID: "3df802e1-3f15-4f5d-ae4e-514d50ff8bde") : failed to sync secret cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.801589 5116 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.801628 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b5e43d-0337-46b2-b4be-93dbb15ef982-proxy-tls podName:64b5e43d-0337-46b2-b4be-93dbb15ef982 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.301617845 +0000 UTC m=+123.765829601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/64b5e43d-0337-46b2-b4be-93dbb15ef982-proxy-tls") pod "machine-config-controller-f9cdd68f7-xztm9" (UID: "64b5e43d-0337-46b2-b4be-93dbb15ef982") : failed to sync secret cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.802000 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-service-ca-bundle podName:3df802e1-3f15-4f5d-ae4e-514d50ff8bde nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.301979044 +0000 UTC m=+123.766190800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-service-ca-bundle") pod "router-default-68cf44c8b8-sd7g8" (UID: "3df802e1-3f15-4f5d-ae4e-514d50ff8bde") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.802481 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lr4\" (UniqueName: \"kubernetes.io/projected/c33e2a61-8e55-4309-be5e-82581b191636-kube-api-access-k4lr4\") pod \"kube-storage-version-migrator-operator-565b79b866-g2k8v\" (UID: \"c33e2a61-8e55-4309-be5e-82581b191636\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.802944 5116 request.go:752] "Waited before sending request" delay="1.003380768s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token"
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.803854 5116 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.803893 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-config podName:8d5bc6a0-fc54-4b74-bd8f-801b601f096d nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.303883185 +0000 UTC m=+123.768094941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-config") pod "kube-apiserver-operator-575994946d-sh26n" (UID: "8d5bc6a0-fc54-4b74-bd8f-801b601f096d") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.810789 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.813444 5116 projected.go:289] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.813479 5116 projected.go:194] Error preparing data for projected volume kube-api-access-bghjf for pod openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.813545 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-kube-api-access-bghjf podName:753e4cbf-dd62-4448-ab39-6f28a23c7ca2 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.313528224 +0000 UTC m=+123.777739980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bghjf" (UniqueName: "kubernetes.io/projected/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-kube-api-access-bghjf") pod "authentication-operator-7f5c659b84-xzscf" (UID: "753e4cbf-dd62-4448-ab39-6f28a23c7ca2") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.819988 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.825406 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.826169 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.326145603 +0000 UTC m=+123.790357369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.854097 5116 projected.go:289] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.854472 5116 projected.go:194] Error preparing data for projected volume kube-api-access-8gwc5 for pod openshift-authentication/oauth-openshift-66458b6674-lw784: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.854588 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5 podName:41bfba7f-9125-4770-99ea-3b72ddc0173b nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.354557377 +0000 UTC m=+123.818769133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8gwc5" (UniqueName: "kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5") pod "oauth-openshift-66458b6674-lw784" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.860886 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmrc\" (UniqueName: \"kubernetes.io/projected/ed4930f4-4d37-415e-a712-9574322f6ccc-kube-api-access-krmrc\") pod \"dns-default-97glr\" (UID: \"ed4930f4-4d37-415e-a712-9574322f6ccc\") " pod="openshift-dns/dns-default-97glr"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.868802 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.872787 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-p4pvw"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.885366 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.885507 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.885871 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.893489 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-648v2"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.922230 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x7nm\" (UniqueName: \"kubernetes.io/projected/5f177940-05e7-4aec-a952-b0eafdc0d9c2-kube-api-access-5x7nm\") pod \"machine-config-operator-67c9d58cbb-jwbms\" (UID: \"5f177940-05e7-4aec-a952-b0eafdc0d9c2\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.927798 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:08 crc kubenswrapper[5116]: E1212 16:17:08.928324 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.428274797 +0000 UTC m=+123.892486553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.948428 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26c54200-b864-4aee-abb7-f486e4bd3236-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-2p7ng\" (UID: \"26c54200-b864-4aee-abb7-f486e4bd3236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.971747 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dgpvm"]
Dec 12 16:17:08 crc kubenswrapper[5116]: W1212 16:17:08.981948 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df42631_1a1c_4104_9f9d_9e197f7cbe33.slice/crio-022d33e1c947c4c4acbcf7f3cfa29f58ad430d62a666ca06d507b3eabd542ff1 WatchSource:0}: Error finding container 022d33e1c947c4c4acbcf7f3cfa29f58ad430d62a666ca06d507b3eabd542ff1: Status 404 returned error can't find the container with id 022d33e1c947c4c4acbcf7f3cfa29f58ad430d62a666ca06d507b3eabd542ff1
Dec 12 16:17:08 crc kubenswrapper[5116]: I1212 16:17:08.989182 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smcql\" (UniqueName: \"kubernetes.io/projected/98ed202e-423f-478d-882c-f49ad62a8660-kube-api-access-smcql\") pod \"service-ca-operator-5b9c976747-dlgtp\" (UID: \"98ed202e-423f-478d-882c-f49ad62a8660\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.008823 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh89c\" (UniqueName: \"kubernetes.io/projected/980cfea9-194c-4650-9dee-7ede187c365f-kube-api-access-dh89c\") pod \"ingress-canary-4v8b9\" (UID: \"980cfea9-194c-4650-9dee-7ede187c365f\") " pod="openshift-ingress-canary/ingress-canary-4v8b9"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.030288 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6cq\" (UniqueName: \"kubernetes.io/projected/3687a9b9-879b-47e3-bc75-6a382ac0febe-kube-api-access-5r6cq\") pod \"cni-sysctl-allowlist-ds-tbppz\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.030948 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.031022 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.031419 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.032293 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.032666 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p"]
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.032729 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.532707623 +0000 UTC m=+123.996919379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.033558 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.034084 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.53406855 +0000 UTC m=+123.998280306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.065340 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.065454 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.077135 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v"]
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.077139 5116 projected.go:289] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.082949 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/b9c44a8b-640d-4806-a985-d12ada8b88dd-kube-api-access-gx5hn\") pod \"marketplace-operator-547dbd544d-lkvbc\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.085490 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.092654 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.095291 5116 projected.go:289] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.106710 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.117758 5116 projected.go:289] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.125414 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.127682 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.134998 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.135154 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.135390 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.135510 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.135627 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.635607967 +0000 UTC m=+124.099819723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.135965 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.136397 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.636389549 +0000 UTC m=+124.100601305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.140256 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-97glr"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.142215 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz"]
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.145577 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.154698 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.163275 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4v8b9"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.165851 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.170474 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.170532 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.204160 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.204239 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.207687 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.224319 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-console/downloads-747b44746d-g5nbl" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.224503 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-g5nbl"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.224586 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.228814 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr"]
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.237404 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.237574 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.737548656 +0000 UTC m=+124.201760402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.238395 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.238714 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.738707968 +0000 UTC m=+124.202919724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.246209 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.252569 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.252717 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.266281 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.286483 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.287081 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-console-operator/console-operator-67c89758df-qh8zt" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.287197 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qh8zt"
Dec 12 16:17:09 crc kubenswrapper[5116]: W1212 16:17:09.295073 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod505ad756_8433_456f_8a6a_d391d7da9b1c.slice/crio-280e9c25f3b97a260978ca63a307e7e58e2398138bb3d344d45e6fe72b99d609 WatchSource:0}: Error finding container 280e9c25f3b97a260978ca63a307e7e58e2398138bb3d344d45e6fe72b99d609: Status 404 returned error can't find the container with id 280e9c25f3b97a260978ca63a307e7e58e2398138bb3d344d45e6fe72b99d609
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.306477 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.308229 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-648v2"]
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.325348 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.339352 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.339741 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-default-certificate\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") "
pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.339788 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-service-ca-bundle\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.339852 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.839823634 +0000 UTC m=+124.304035580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.340006 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bghjf\" (UniqueName: \"kubernetes.io/projected/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-kube-api-access-bghjf\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.340259 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.340332 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64b5e43d-0337-46b2-b4be-93dbb15ef982-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.340445 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-config\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.340939 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-service-ca-bundle\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.341329 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-config\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.341433 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.841411417 +0000 UTC m=+124.305623383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.345710 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-default-certificate\") pod \"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.346839 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghjf\" (UniqueName: \"kubernetes.io/projected/753e4cbf-dd62-4448-ab39-6f28a23c7ca2-kube-api-access-bghjf\") pod \"authentication-operator-7f5c659b84-xzscf\" (UID: \"753e4cbf-dd62-4448-ab39-6f28a23c7ca2\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.346979 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64b5e43d-0337-46b2-b4be-93dbb15ef982-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-xztm9\" (UID: \"64b5e43d-0337-46b2-b4be-93dbb15ef982\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.347765 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.367583 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.373698 5116 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-console/console-64d44f6ddf-qhrd4" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.373865 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.385706 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.405220 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.425573 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 16:17:09 crc kubenswrapper[5116]: W1212 16:17:09.429487 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3687a9b9_879b_47e3_bc75_6a382ac0febe.slice/crio-3681baf2276e85cb25a037562978d2f5efdde295970d2d50c559d17be5687927 WatchSource:0}: Error finding container 
3681baf2276e85cb25a037562978d2f5efdde295970d2d50c559d17be5687927: Status 404 returned error can't find the container with id 3681baf2276e85cb25a037562978d2f5efdde295970d2d50c559d17be5687927 Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.443466 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.443735 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwc5\" (UniqueName: \"kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.444612 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:09.944571609 +0000 UTC m=+124.408783545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.447594 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.449612 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwc5\" (UniqueName: \"kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5\") pod \"oauth-openshift-66458b6674-lw784\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.465645 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.471778 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.484971 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.494497 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.505794 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.512849 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.526600 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.533851 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.544092 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp"] Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.545895 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.546438 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.046409465 +0000 UTC m=+124.510621221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.547957 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.556134 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44"] Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.583221 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.591675 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.596955 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d5bc6a0-fc54-4b74-bd8f-801b601f096d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sh26n\" (UID: \"8d5bc6a0-fc54-4b74-bd8f-801b601f096d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.597708 5116 projected.go:194] Error preparing data for projected volume kube-api-access-tgztn for pod openshift-apiserver/apiserver-9ddfb9f55-svwnw: failed to sync configmap cache: timed out waiting for the condition Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.597813 
5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d377873-6680-42c5-afb1-52f63ffff4a4-kube-api-access-tgztn podName:8d377873-6680-42c5-afb1-52f63ffff4a4 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.097788975 +0000 UTC m=+124.562000731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tgztn" (UniqueName: "kubernetes.io/projected/8d377873-6680-42c5-afb1-52f63ffff4a4-kube-api-access-tgztn") pod "apiserver-9ddfb9f55-svwnw" (UID: "8d377873-6680-42c5-afb1-52f63ffff4a4") : failed to sync configmap cache: timed out waiting for the condition Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.605874 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.616093 5116 projected.go:194] Error preparing data for projected volume kube-api-access-2p88j for pod openshift-config-operator/openshift-config-operator-5777786469-24z4m: failed to sync configmap cache: timed out waiting for the condition Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.616240 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f5297fd-58f7-4678-94d1-6afb8b1639cf-kube-api-access-2p88j podName:0f5297fd-58f7-4678-94d1-6afb8b1639cf nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.11621091 +0000 UTC m=+124.580422666 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2p88j" (UniqueName: "kubernetes.io/projected/0f5297fd-58f7-4678-94d1-6afb8b1639cf-kube-api-access-2p88j") pod "openshift-config-operator-5777786469-24z4m" (UID: "0f5297fd-58f7-4678-94d1-6afb8b1639cf") : failed to sync configmap cache: timed out waiting for the condition Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.624481 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.628485 5116 projected.go:194] Error preparing data for projected volume kube-api-access-bbgx5 for pod openshift-machine-api/machine-api-operator-755bb95488-87slr: failed to sync configmap cache: timed out waiting for the condition Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.628561 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-kube-api-access-bbgx5 podName:dd889f29-959e-4c5f-b7d0-44e2ef38dc22 nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.128539481 +0000 UTC m=+124.592751237 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bbgx5" (UniqueName: "kubernetes.io/projected/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-kube-api-access-bbgx5") pod "machine-api-operator-755bb95488-87slr" (UID: "dd889f29-959e-4c5f-b7d0-44e2ef38dc22") : failed to sync configmap cache: timed out waiting for the condition Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.632271 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4rcq\" (UniqueName: \"kubernetes.io/projected/95823ee2-7080-4b23-87d9-e69d42ab1787-kube-api-access-b4rcq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k8t8q\" (UID: \"95823ee2-7080-4b23-87d9-e69d42ab1787\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.646889 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.647451 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.147417159 +0000 UTC m=+124.611628935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.648268 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.648558 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.148543589 +0000 UTC m=+124.612755345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.654246 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" event={"ID":"505ad756-8433-456f-8a6a-d391d7da9b1c","Type":"ContainerStarted","Data":"280e9c25f3b97a260978ca63a307e7e58e2398138bb3d344d45e6fe72b99d609"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.665622 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.675807 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.676844 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dgpvm" event={"ID":"0df42631-1a1c-4104-9f9d-9e197f7cbe33","Type":"ContainerStarted","Data":"7b745d388a05b3e847e0c2386aaeae0a229a60d94cdc63d394b6e5c02a495468"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.676892 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dgpvm" event={"ID":"0df42631-1a1c-4104-9f9d-9e197f7cbe33","Type":"ContainerStarted","Data":"022d33e1c947c4c4acbcf7f3cfa29f58ad430d62a666ca06d507b3eabd542ff1"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.681579 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" event={"ID":"bed86e5d-77df-45bf-ae08-16b99f150f6d","Type":"ContainerStarted","Data":"300edb480b9a027854d2cb544128ae8f224c9b9aeeb7854f6959041e4eba1191"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.684033 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p" event={"ID":"f1477647-214b-4ac9-9c1c-bafb5b506eb3","Type":"ContainerStarted","Data":"e59646c75b7211aacf2c3d4e9c8fcc7d5a1bb651d265c3d73e973c02bd5bb300"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.685034 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.687323 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-648v2" event={"ID":"957f59ba-d9a7-424b-94bb-8899126450ed","Type":"ContainerStarted","Data":"edabde7be2b93f65c1bf3099bad49194b247d5f2c0ecff3493a0cfe3b4d4aebf"} Dec 12 16:17:09 crc 
kubenswrapper[5116]: I1212 16:17:09.694435 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" event={"ID":"3687a9b9-879b-47e3-bc75-6a382ac0febe","Type":"ContainerStarted","Data":"3681baf2276e85cb25a037562978d2f5efdde295970d2d50c559d17be5687927"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.697647 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2fl\" (UniqueName: \"kubernetes.io/projected/e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3-kube-api-access-7p2fl\") pod \"package-server-manager-77f986bd66-9th2f\" (UID: \"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.699360 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" event={"ID":"0315e170-e93e-4945-89e2-3e5e56e0d317","Type":"ContainerStarted","Data":"a01f514271e116fc4921b6d0a04b0225070f3a3d074dfcafeef6162aa5972ca6"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.699515 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzndp\" (UniqueName: \"kubernetes.io/projected/2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e-kube-api-access-rzndp\") pod \"olm-operator-5cdf44d969-rtljx\" (UID: \"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.699949 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkd22\" (UniqueName: \"kubernetes.io/projected/b80cc078-24bf-4a75-b2ae-76f252e843f9-kube-api-access-dkd22\") pod \"collect-profiles-29425935-jlznq\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:09 crc 
kubenswrapper[5116]: I1212 16:17:09.701243 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmg2n\" (UniqueName: \"kubernetes.io/projected/d48aaed9-8c63-4ef9-823b-1c58fadbcc17-kube-api-access-lmg2n\") pod \"catalog-operator-75ff9f647d-mbmd7\" (UID: \"d48aaed9-8c63-4ef9-823b-1c58fadbcc17\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.704654 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9fkp\" (UniqueName: \"kubernetes.io/projected/6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6-kube-api-access-l9fkp\") pod \"packageserver-7d4fc7d867-6fnz8\" (UID: \"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.705327 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.711564 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-p4pvw" event={"ID":"b759840e-6855-49b1-b4e5-51143b00c6eb","Type":"ContainerStarted","Data":"3035d4b2fe379ade56ea872fcf2e0591b4f5dbba1a59d0de71f4c06146ee6517"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.711827 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-p4pvw" event={"ID":"b759840e-6855-49b1-b4e5-51143b00c6eb","Type":"ContainerStarted","Data":"ca6e7400409c8c1b589c292e8355c5f4b34f77c3ebb4530f752a4c769858cc57"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.713792 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl8w8\" (UniqueName: \"kubernetes.io/projected/3df802e1-3f15-4f5d-ae4e-514d50ff8bde-kube-api-access-tl8w8\") pod 
\"router-default-68cf44c8b8-sd7g8\" (UID: \"3df802e1-3f15-4f5d-ae4e-514d50ff8bde\") " pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.714773 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" event={"ID":"c33e2a61-8e55-4309-be5e-82581b191636","Type":"ContainerStarted","Data":"c30cb67cf5da4828bb79cd902e9afbc4244fbcbac12639a53afcff1674c7294f"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.721742 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" event={"ID":"6a8a3e58-1eef-468b-84a4-5b3071698628","Type":"ContainerStarted","Data":"4bad9ef07693f5cb11926b4aea379b83f8d2b2ac8b1b7e85ff45790386b6966c"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.721981 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" event={"ID":"6a8a3e58-1eef-468b-84a4-5b3071698628","Type":"ContainerStarted","Data":"27a6aaa1b24f6174284dbf453cad8027320edcaf4d6a6f9fcbcbf3fbf5df81e4"} Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.729831 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.730337 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.759219 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.777998 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.782243 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" event={"ID":"cb5172ad-e8a1-4893-a33f-9e95b26fd720","Type":"ContainerStarted","Data":"a6706c7bd7b91acbe33cb15b5d7f1d99e0fd7c6295302996f50b114982e591b8"} Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.783615 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.283588107 +0000 UTC m=+124.747799863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.787575 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.797878 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.800904 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.862725 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.863100 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.363082563 +0000 UTC m=+124.827294319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.900758 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.905358 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.907635 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.922231 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.925950 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.934517 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.964218 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.964501 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.464456056 +0000 UTC m=+124.928667812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.965040 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.965286 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 16:17:09 crc kubenswrapper[5116]: E1212 16:17:09.965469 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.465447853 +0000 UTC m=+124.929659609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.975611 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" Dec 12 16:17:09 crc kubenswrapper[5116]: I1212 16:17:09.984460 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.001837 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.067770 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.067981 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.567934966 +0000 UTC m=+125.032146732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.068821 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.069315 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.569305763 +0000 UTC m=+125.033517519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.099863 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-97glr"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.106912 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pq598"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.116364 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.122329 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.129615 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4v8b9"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.133500 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qh8zt"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.170808 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.171203 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgztn\" (UniqueName: \"kubernetes.io/projected/8d377873-6680-42c5-afb1-52f63ffff4a4-kube-api-access-tgztn\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.171236 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2p88j\" (UniqueName: \"kubernetes.io/projected/0f5297fd-58f7-4678-94d1-6afb8b1639cf-kube-api-access-2p88j\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.171258 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgx5\" (UniqueName: \"kubernetes.io/projected/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-kube-api-access-bbgx5\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.173393 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.67337245 +0000 UTC m=+125.137584216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.209148 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgztn\" (UniqueName: \"kubernetes.io/projected/8d377873-6680-42c5-afb1-52f63ffff4a4-kube-api-access-tgztn\") pod \"apiserver-9ddfb9f55-svwnw\" (UID: \"8d377873-6680-42c5-afb1-52f63ffff4a4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.211657 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbgx5\" (UniqueName: \"kubernetes.io/projected/dd889f29-959e-4c5f-b7d0-44e2ef38dc22-kube-api-access-bbgx5\") pod \"machine-api-operator-755bb95488-87slr\" (UID: \"dd889f29-959e-4c5f-b7d0-44e2ef38dc22\") " pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.212705 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p88j\" (UniqueName: \"kubernetes.io/projected/0f5297fd-58f7-4678-94d1-6afb8b1639cf-kube-api-access-2p88j\") pod \"openshift-config-operator-5777786469-24z4m\" (UID: \"0f5297fd-58f7-4678-94d1-6afb8b1639cf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.272981 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.273330 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.773313724 +0000 UTC m=+125.237525480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.325529 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.327582 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.337482 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-g5nbl"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.355138 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.368728 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-qhrd4"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.371017 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.371944 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"] Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.373996 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.374219 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.874193835 +0000 UTC m=+125.338405591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.374335 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.374666 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:10.874650887 +0000 UTC m=+125.338862643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.386225 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.391507 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.424810 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.428575 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.475473 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.476049 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:10.976029301 +0000 UTC m=+125.440241057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.567205 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-h7g5m" podStartSLOduration=104.56718868 podStartE2EDuration="1m44.56718868s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:10.566538592 +0000 UTC m=+125.030750348" watchObservedRunningTime="2025-12-12 16:17:10.56718868 +0000 UTC m=+125.031400436" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.591693 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.592316 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.092287235 +0000 UTC m=+125.556499001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.692899 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.693376 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.193340759 +0000 UTC m=+125.657552525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.773578 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-dgpvm" podStartSLOduration=103.773561744 podStartE2EDuration="1m43.773561744s" podCreationTimestamp="2025-12-12 16:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:10.771189051 +0000 UTC m=+125.235400807" watchObservedRunningTime="2025-12-12 16:17:10.773561744 +0000 UTC m=+125.237773500" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.796247 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.802637 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.302621655 +0000 UTC m=+125.766833411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.835279 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" event={"ID":"e21b028a-7b09-4b86-9712-63820ff56d55","Type":"ContainerStarted","Data":"67909c6621a4cb8ebc6f61418eeb01fb8ee82ae4e727cc13e431639d07541126"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.889372 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" event={"ID":"26c54200-b864-4aee-abb7-f486e4bd3236","Type":"ContainerStarted","Data":"f88a15ef0cc517b51f1928780d812178328672760a1825d582eaa6133db256f0"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.905001 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:10 crc kubenswrapper[5116]: E1212 16:17:10.905393 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.405375896 +0000 UTC m=+125.869587652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.914228 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-qhrd4" event={"ID":"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc","Type":"ContainerStarted","Data":"ed2010b82d99378b7aa28cbcbf57e142fd0a202d9be3ecf2a92a9293b992552e"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.920554 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" event={"ID":"cb5172ad-e8a1-4893-a33f-9e95b26fd720","Type":"ContainerStarted","Data":"5f5e72aaf0746f1145952bec4e060b2c7891fbdf6593752268ebdb1f88cdefee"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.927160 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" event={"ID":"505ad756-8433-456f-8a6a-d391d7da9b1c","Type":"ContainerStarted","Data":"a473e2a66c03b1906c17a289978d55a9191afcbd92abf37e0f708f71cc2146a4"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.951858 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" event={"ID":"57dbe731-30cc-45f4-b457-346f62af94fa","Type":"ContainerStarted","Data":"010333041b46f9ef3e997fb6ac20f23f1ab8efa73f35581707b45c77ce131f0b"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.963805 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4v8b9" 
event={"ID":"980cfea9-194c-4650-9dee-7ede187c365f","Type":"ContainerStarted","Data":"e41f2b26f1418e9379541f2a45f0ede8b08a978c44850d0770ebabe1e70ecfad"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.967429 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p" event={"ID":"f1477647-214b-4ac9-9c1c-bafb5b506eb3","Type":"ContainerStarted","Data":"fda46c3b65a05d54e1c384027d93e0910a5cb59a82f1ed0c2e878d6feaf35311"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.967471 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p" event={"ID":"f1477647-214b-4ac9-9c1c-bafb5b506eb3","Type":"ContainerStarted","Data":"a43e2ec6ec96401cf36c38b0f25361d3ae607eb90e58d691834d03a11a988a19"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.981862 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-p4pvw" podStartSLOduration=6.98183836 podStartE2EDuration="6.98183836s" podCreationTimestamp="2025-12-12 16:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:10.979681652 +0000 UTC m=+125.443893418" watchObservedRunningTime="2025-12-12 16:17:10.98183836 +0000 UTC m=+125.446050116" Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.986842 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" event={"ID":"ad073d6b-f522-47b7-a45f-1c4ae18f9a10","Type":"ContainerStarted","Data":"919a2c9c4e49847b081887f76f00131f032af333127cc477bbe1c39711e7f95b"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.987052 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" 
event={"ID":"ad073d6b-f522-47b7-a45f-1c4ae18f9a10","Type":"ContainerStarted","Data":"2342393287ded0e7813edcc7a6ebc6087d74c19bc64cd22488005dd24c8b0b84"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.995315 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" event={"ID":"98ed202e-423f-478d-882c-f49ad62a8660","Type":"ContainerStarted","Data":"2dd507bcaa722ab8351ec3cbfd422962d470af76806153924a3da59fa484943d"} Dec 12 16:17:10 crc kubenswrapper[5116]: I1212 16:17:10.995470 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" event={"ID":"98ed202e-423f-478d-882c-f49ad62a8660","Type":"ContainerStarted","Data":"494159f5708f17bb5bf99f714dc53cf28f362dff17fab1170a4b73d0a1887186"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.001030 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" event={"ID":"3df802e1-3f15-4f5d-ae4e-514d50ff8bde","Type":"ContainerStarted","Data":"06cbfdb328c1734452335cc721b3682d64060ab068af2a1ab330dfa3707ca965"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.007171 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.007586 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.507572621 +0000 UTC m=+125.971784377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.017075 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-97glr" event={"ID":"ed4930f4-4d37-415e-a712-9574322f6ccc","Type":"ContainerStarted","Data":"18ef797026676dbae0ce0c482b60ee7f19df944679f494f8602671343c8d978f"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.069669 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" event={"ID":"c33e2a61-8e55-4309-be5e-82581b191636","Type":"ContainerStarted","Data":"b5abc26df2030a3b242cf5c998a24dc2e46e503cd9b2b06f603c79548de765b0"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.077365 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" event={"ID":"5f177940-05e7-4aec-a952-b0eafdc0d9c2","Type":"ContainerStarted","Data":"e5b8203a4e0be52f7ce2dc4dbd3f40c455ec743c46a799fcdc028fa54dba7893"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.081154 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" event={"ID":"b9c44a8b-640d-4806-a985-d12ada8b88dd","Type":"ContainerStarted","Data":"5d138f01bf35f517d59623aebb7a7bba19ca860bc792cff6aaf09d603ce763d7"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.084580 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" event={"ID":"01eff4cc-010a-4ba2-87a4-2dd5850dab4b","Type":"ContainerStarted","Data":"2400084f7bc8cf4b764e31ef01f74d8ed6a6a8009f9c6c7effdb6192d6db6479"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.086636 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-g5nbl" event={"ID":"24053646-aeb7-426b-8065-63075e9aa0c8","Type":"ContainerStarted","Data":"1ab9622b1a08abe6e76c462ae9012c3bef67b15bb7412048685b2c88f22c83d1"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.088866 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qh8zt" event={"ID":"eff66a0b-6756-4fd2-8fa5-756289614a15","Type":"ContainerStarted","Data":"8ea8870f819e201913c4ff3391aac667284c3cd3253fcd6a09a9bb5511728c3b"} Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.108249 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.108567 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.608522113 +0000 UTC m=+126.072733869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.109149 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.110682 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.610671592 +0000 UTC m=+126.074883348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.213231 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.224178 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.715141229 +0000 UTC m=+126.179352985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.303358 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-g52qp" podStartSLOduration=105.303335828 podStartE2EDuration="1m45.303335828s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:11.302751822 +0000 UTC m=+125.766963578" watchObservedRunningTime="2025-12-12 16:17:11.303335828 +0000 UTC m=+125.767547584" Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.315228 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.321877 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.821856326 +0000 UTC m=+126.286068082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.377686 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-k9w8q"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.396387 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.399263 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.416618 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.418551 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:11.918527423 +0000 UTC m=+126.382739179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.432265 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.433395 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-qfd7p" podStartSLOduration=105.433377281 podStartE2EDuration="1m45.433377281s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:11.41841827 +0000 UTC m=+125.882630026" watchObservedRunningTime="2025-12-12 16:17:11.433377281 +0000 UTC m=+125.897589037" Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.509916 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dlgtp" podStartSLOduration=104.509866247 podStartE2EDuration="1m44.509866247s" podCreationTimestamp="2025-12-12 16:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:11.505993572 +0000 UTC m=+125.970205338" watchObservedRunningTime="2025-12-12 16:17:11.509866247 +0000 UTC m=+125.974078003" Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.512251 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-g2k8v" podStartSLOduration=105.512239861 podStartE2EDuration="1m45.512239861s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:11.464846487 +0000 UTC m=+125.929058243" watchObservedRunningTime="2025-12-12 16:17:11.512239861 +0000 UTC m=+125.976451617" Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.522602 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.523482 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.023463362 +0000 UTC m=+126.487675118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: W1212 16:17:11.564776 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod572c6180_44e5_4299_afe5_a5483f6e0711.slice/crio-4c71e28fbf6607b4638d3972fbdae089d1caeda667e25ae4001485416b2433d7 WatchSource:0}: Error finding container 4c71e28fbf6607b4638d3972fbdae089d1caeda667e25ae4001485416b2433d7: Status 404 returned error can't find the container with id 4c71e28fbf6607b4638d3972fbdae089d1caeda667e25ae4001485416b2433d7 Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.624346 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wqsrz" podStartSLOduration=105.624326092 podStartE2EDuration="1m45.624326092s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:11.554765323 +0000 UTC m=+126.018977119" watchObservedRunningTime="2025-12-12 16:17:11.624326092 +0000 UTC m=+126.088537848" Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.630746 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.631325 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.131284689 +0000 UTC m=+126.595496475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.732219 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.732699 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.232681933 +0000 UTC m=+126.696893699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.838053 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.838378 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.338318491 +0000 UTC m=+126.802530247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.847450 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.848021 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.348001622 +0000 UTC m=+126.812213378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.866325 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lw784"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.890977 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.891028 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.945189 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9"] Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.949054 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:11 crc kubenswrapper[5116]: E1212 16:17:11.949519 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:12.449502258 +0000 UTC m=+126.913714014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:11 crc kubenswrapper[5116]: I1212 16:17:11.990886 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:11.999159 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.008771 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.015257 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.019245 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-svwnw"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.021927 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.051596 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.052012 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.551995102 +0000 UTC m=+127.016206858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: W1212 16:17:12.059292 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a15c5fa_bcc2_4558_a2cb_82ad217e3f1e.slice/crio-a691b450e8e0416b600fa672dd7efde9942c4ad1c6ce9a8fc4f8a4c17b76aca3 WatchSource:0}: Error finding container a691b450e8e0416b600fa672dd7efde9942c4ad1c6ce9a8fc4f8a4c17b76aca3: Status 404 returned error can't find the container with id a691b450e8e0416b600fa672dd7efde9942c4ad1c6ce9a8fc4f8a4c17b76aca3 Dec 12 16:17:12 crc kubenswrapper[5116]: W1212 16:17:12.113705 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode30a5a66_1aa8_4e8b_8ca1_9796e082f7d3.slice/crio-c6c6adf1d7efb617d2d842e5e7f7fcfc9791ff815fe71743c55f83bea642f7ec WatchSource:0}: Error finding container 
c6c6adf1d7efb617d2d842e5e7f7fcfc9791ff815fe71743c55f83bea642f7ec: Status 404 returned error can't find the container with id c6c6adf1d7efb617d2d842e5e7f7fcfc9791ff815fe71743c55f83bea642f7ec Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.118138 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-87slr"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.121661 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.155137 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.155441 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.65538295 +0000 UTC m=+127.119594716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.157016 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.157594 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.657576909 +0000 UTC m=+127.121788665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.159708 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-24z4m"] Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.180178 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" event={"ID":"3df802e1-3f15-4f5d-ae4e-514d50ff8bde","Type":"ContainerStarted","Data":"ae73f92e90f1cc269f0d7548d042f4466ea176dcf89d8364304eba57d14f1e71"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.201211 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-97glr" event={"ID":"ed4930f4-4d37-415e-a712-9574322f6ccc","Type":"ContainerStarted","Data":"5b2c5ba88380e4d5a006b7436ff1488fb58751facc629cac73fb5b782bda0a05"} Dec 12 16:17:12 crc kubenswrapper[5116]: W1212 16:17:12.225699 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d5bc6a0_fc54_4b74_bd8f_801b601f096d.slice/crio-70c3bb08f5849beff6be04c4e67642a8f32a68807a614c51ac2ad5ed529a21a1 WatchSource:0}: Error finding container 70c3bb08f5849beff6be04c4e67642a8f32a68807a614c51ac2ad5ed529a21a1: Status 404 returned error can't find the container with id 70c3bb08f5849beff6be04c4e67642a8f32a68807a614c51ac2ad5ed529a21a1 Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.230914 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podStartSLOduration=106.230801866 podStartE2EDuration="1m46.230801866s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.229007638 +0000 UTC m=+126.693219404" watchObservedRunningTime="2025-12-12 16:17:12.230801866 +0000 UTC m=+126.695013622" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.237127 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" event={"ID":"753e4cbf-dd62-4448-ab39-6f28a23c7ca2","Type":"ContainerStarted","Data":"62ea48454983c2cd5b0633653c2c160a1ab8fe4e4e1d0b6ce64c51c1973895c4"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.247124 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" event={"ID":"1da6019f-ecaf-43cc-8df2-cddce4345203","Type":"ContainerStarted","Data":"448087ec006d5f1b187bd07f72a0474434e448fccbbc1f32a1011e3147127cbf"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.260988 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.265918 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.765890009 +0000 UTC m=+127.230101765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.321509 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" event={"ID":"5f177940-05e7-4aec-a952-b0eafdc0d9c2","Type":"ContainerStarted","Data":"0e4afd373af3030d93b67d796b8202467453a86b4c52a93dd3667de221c802c9"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.321566 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" event={"ID":"5f177940-05e7-4aec-a952-b0eafdc0d9c2","Type":"ContainerStarted","Data":"e1b05a45e4d04478a37ee90266d2e1139e824adda67f4e915167d12b76642305"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.324320 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" event={"ID":"4a88744b-ced0-4609-bede-f65d27510b47","Type":"ContainerStarted","Data":"015bba07299eac299f14c9067e81d8ea5226622ff51d02f4d5e41fa809dfcd82"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.365444 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.366275 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.866259046 +0000 UTC m=+127.330470802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.401588 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" event={"ID":"b9c44a8b-640d-4806-a985-d12ada8b88dd","Type":"ContainerStarted","Data":"c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.402298 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.421295 5116 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-lkvbc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.421400 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" podUID="b9c44a8b-640d-4806-a985-d12ada8b88dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 
10.217.0.38:8080: connect: connection refused" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.434312 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" event={"ID":"01eff4cc-010a-4ba2-87a4-2dd5850dab4b","Type":"ContainerStarted","Data":"dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.435545 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.436639 5116 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fsj7q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.436688 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" podUID="01eff4cc-010a-4ba2-87a4-2dd5850dab4b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.445322 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" podStartSLOduration=106.445303209 podStartE2EDuration="1m46.445303209s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.438955959 +0000 UTC m=+126.903167715" watchObservedRunningTime="2025-12-12 16:17:12.445303209 +0000 UTC m=+126.909514965" Dec 12 16:17:12 crc kubenswrapper[5116]: 
I1212 16:17:12.446387 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jwbms" podStartSLOduration=106.446375658 podStartE2EDuration="1m46.446375658s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.352621749 +0000 UTC m=+126.816833505" watchObservedRunningTime="2025-12-12 16:17:12.446375658 +0000 UTC m=+126.910587414" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.448860 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-g5nbl" event={"ID":"24053646-aeb7-426b-8065-63075e9aa0c8","Type":"ContainerStarted","Data":"6f771ea0ff080df4f02d7de8851b263bc4b05bf15ceaa93018238da268fc243b"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.449946 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-g5nbl" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.465772 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" podStartSLOduration=106.465753779 podStartE2EDuration="1m46.465753779s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.463418006 +0000 UTC m=+126.927629762" watchObservedRunningTime="2025-12-12 16:17:12.465753779 +0000 UTC m=+126.929965535" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.469949 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qh8zt" 
event={"ID":"eff66a0b-6756-4fd2-8fa5-756289614a15","Type":"ContainerStarted","Data":"ce86f6f2025002f39f318578c05d773b2cf31a2a060eec55fb1497cfcdf24bfe"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.471325 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.471385 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-g5nbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.471468 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-g5nbl" podUID="24053646-aeb7-426b-8065-63075e9aa0c8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.471734 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.471991 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.971949675 +0000 UTC m=+127.436161431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.472462 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.473933 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:12.973924448 +0000 UTC m=+127.438136204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.492254 5116 patch_prober.go:28] interesting pod/console-operator-67c89758df-qh8zt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.492325 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qh8zt" podUID="eff66a0b-6756-4fd2-8fa5-756289614a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.492594 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" event={"ID":"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e","Type":"ContainerStarted","Data":"a691b450e8e0416b600fa672dd7efde9942c4ad1c6ce9a8fc4f8a4c17b76aca3"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.509953 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" event={"ID":"d2e8310b-4d7c-4c19-82af-587b427fc159","Type":"ContainerStarted","Data":"b6dc497fea40e47f67b4a643f010df7ec74120b7c9846dc6c9242a4295f3f9da"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.529977 5116 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" event={"ID":"e21b028a-7b09-4b86-9712-63820ff56d55","Type":"ContainerStarted","Data":"8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.530048 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.544412 5116 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-vq6gq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.544478 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" podUID="e21b028a-7b09-4b86-9712-63820ff56d55" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.546299 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-g5nbl" podStartSLOduration=106.546286942 podStartE2EDuration="1m46.546286942s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.511089856 +0000 UTC m=+126.975301612" watchObservedRunningTime="2025-12-12 16:17:12.546286942 +0000 UTC m=+127.010498698" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.555543 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" event={"ID":"26c54200-b864-4aee-abb7-f486e4bd3236","Type":"ContainerStarted","Data":"6676777ae4d4348c6d76b87a3eb5d34109916af9ae5bd52364c2f8fff36281e7"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.567320 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" event={"ID":"41bfba7f-9125-4770-99ea-3b72ddc0173b","Type":"ContainerStarted","Data":"6db635fef50e171c20119e96417f3465b6df6d09f87f1ff5b6f6eceea4e7e10d"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.575337 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.575965 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.075929458 +0000 UTC m=+127.540141254 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.577321 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" event={"ID":"572c6180-44e5-4299-afe5-a5483f6e0711","Type":"ContainerStarted","Data":"4c71e28fbf6607b4638d3972fbdae089d1caeda667e25ae4001485416b2433d7"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.578018 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.579122 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.079095224 +0000 UTC m=+127.543306980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.583099 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-qh8zt" podStartSLOduration=106.583082451 podStartE2EDuration="1m46.583082451s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.58231123 +0000 UTC m=+127.046522986" watchObservedRunningTime="2025-12-12 16:17:12.583082451 +0000 UTC m=+127.047294207" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.583763 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" podStartSLOduration=106.583755439 podStartE2EDuration="1m46.583755439s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.54471265 +0000 UTC m=+127.008924406" watchObservedRunningTime="2025-12-12 16:17:12.583755439 +0000 UTC m=+127.047967195" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.586918 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" event={"ID":"3687a9b9-879b-47e3-bc75-6a382ac0febe","Type":"ContainerStarted","Data":"bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347"} 
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.588154 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.609089 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" event={"ID":"0315e170-e93e-4945-89e2-3e5e56e0d317","Type":"ContainerStarted","Data":"4120ec7bcac60459433cc0e659744da3cf465902da5385ddca527fd7f046f50e"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.627243 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-qhrd4" event={"ID":"b68bd1cf-aa0c-43e2-a771-11c6c91d19dc","Type":"ContainerStarted","Data":"cc7cf1fef1e54362f5b802dffd130825d156643e74c6ca98a2d40de07296308c"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.658617 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" event={"ID":"cb5172ad-e8a1-4893-a33f-9e95b26fd720","Type":"ContainerStarted","Data":"4f038f146884bad37fdba45b30db2579bec0c3cec1f08050aa00e00d00ff7c70"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.668200 5116 generic.go:358] "Generic (PLEG): container finished" podID="57dbe731-30cc-45f4-b457-346f62af94fa" containerID="7421c94c4532be4f401a5969e2134467853aa4e2573d8e22665d5f58cf72cb30" exitCode=0 Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.668421 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" event={"ID":"57dbe731-30cc-45f4-b457-346f62af94fa","Type":"ContainerDied","Data":"7421c94c4532be4f401a5969e2134467853aa4e2573d8e22665d5f58cf72cb30"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.676245 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4v8b9" 
event={"ID":"980cfea9-194c-4650-9dee-7ede187c365f","Type":"ContainerStarted","Data":"676aa9b2b5baab5864d152981c60037def02323f5c8a43e73f918926aaffff61"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.681538 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.682967 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.182945014 +0000 UTC m=+127.647156770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.708536 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2p7ng" podStartSLOduration=106.70851059 podStartE2EDuration="1m46.70851059s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.656544994 +0000 UTC m=+127.120756750" watchObservedRunningTime="2025-12-12 16:17:12.70851059 +0000 UTC 
m=+127.172722346" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.737093 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" event={"ID":"ad073d6b-f522-47b7-a45f-1c4ae18f9a10","Type":"ContainerStarted","Data":"ce76a270dc8035e4e94556f15bd6e6f745bb51d3a3b718806441280ecb990a04"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.762322 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.783531 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.786765 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.286745512 +0000 UTC m=+127.750957268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.788371 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" podStartSLOduration=105.788352645 podStartE2EDuration="1m45.788352645s" podCreationTimestamp="2025-12-12 16:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.736458211 +0000 UTC m=+127.200669977" watchObservedRunningTime="2025-12-12 16:17:12.788352645 +0000 UTC m=+127.252564401" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.790566 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" event={"ID":"8d377873-6680-42c5-afb1-52f63ffff4a4","Type":"ContainerStarted","Data":"325a90f31260ba262e95e01970751caa8f67e39b2bd57e0104d0ff186e22bedc"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.795795 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ck6ws" podStartSLOduration=106.795764364 podStartE2EDuration="1m46.795764364s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.782072537 +0000 UTC m=+127.246284283" watchObservedRunningTime="2025-12-12 16:17:12.795764364 +0000 UTC m=+127.259976120" Dec 12 
16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.806328 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" event={"ID":"d48aaed9-8c63-4ef9-823b-1c58fadbcc17","Type":"ContainerStarted","Data":"8e380a2354435d793ee8d4cb21504f35ce1b726a8af98fef62316815aeacd49e"} Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.859476 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xfkjr" podStartSLOduration=106.859448906 podStartE2EDuration="1m46.859448906s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.853493456 +0000 UTC m=+127.317705232" watchObservedRunningTime="2025-12-12 16:17:12.859448906 +0000 UTC m=+127.323660662" Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.884995 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.885180 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.385156006 +0000 UTC m=+127.849367762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.885773 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.895533 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.395482314 +0000 UTC m=+127.859694070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.908836 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8"
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.927033 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" podStartSLOduration=8.927011471 podStartE2EDuration="8.927011471s" podCreationTimestamp="2025-12-12 16:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:12.926101107 +0000 UTC m=+127.390312873" watchObservedRunningTime="2025-12-12 16:17:12.927011471 +0000 UTC m=+127.391223227"
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.936057 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:12 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:12 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:12 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.936196 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.986979 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.987265 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.487219128 +0000 UTC m=+127.951430894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:12 crc kubenswrapper[5116]: I1212 16:17:12.987790 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:12 crc kubenswrapper[5116]: E1212 16:17:12.988242 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.488234176 +0000 UTC m=+127.952445932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.018507 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-qhrd4" podStartSLOduration=107.018484579 podStartE2EDuration="1m47.018484579s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:13.015494578 +0000 UTC m=+127.479706334" watchObservedRunningTime="2025-12-12 16:17:13.018484579 +0000 UTC m=+127.482696335"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.039215 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4v8b9" podStartSLOduration=9.039199445 podStartE2EDuration="9.039199445s" podCreationTimestamp="2025-12-12 16:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:13.037676224 +0000 UTC m=+127.501887990" watchObservedRunningTime="2025-12-12 16:17:13.039199445 +0000 UTC m=+127.503411201"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.089386 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.089891 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.589846096 +0000 UTC m=+128.054057852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.090371 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.090865 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.590856303 +0000 UTC m=+128.055068059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.104750 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5vn44" podStartSLOduration=107.104731405 podStartE2EDuration="1m47.104731405s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:13.080987797 +0000 UTC m=+127.545199553" watchObservedRunningTime="2025-12-12 16:17:13.104731405 +0000 UTC m=+127.568943171"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.191470 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.192234 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.692208746 +0000 UTC m=+128.156420492 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.192476 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.193317 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.693307185 +0000 UTC m=+128.157518941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.294825 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.295194 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.795176292 +0000 UTC m=+128.259388048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.387431 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57606: no serving certificate available for the kubelet"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.396033 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.396462 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.896447974 +0000 UTC m=+128.360659730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.483806 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57620: no serving certificate available for the kubelet"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.498933 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.499568 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:13.999543393 +0000 UTC m=+128.463755149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.552588 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57636: no serving certificate available for the kubelet"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.603583 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.604128 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.104085802 +0000 UTC m=+128.568297558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.634575 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57638: no serving certificate available for the kubelet"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.707039 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.707650 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.207627664 +0000 UTC m=+128.671839420 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.753674 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57654: no serving certificate available for the kubelet"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.812970 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.813594 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.313564059 +0000 UTC m=+128.777775815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.849253 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tbppz"]
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.883898 5116 generic.go:358] "Generic (PLEG): container finished" podID="0f5297fd-58f7-4678-94d1-6afb8b1639cf" containerID="0242062f006d48bfbc1e1e4e93d36f73f2cd849cf4ea2567a0085a63cbd2401b" exitCode=0
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.884131 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" event={"ID":"0f5297fd-58f7-4678-94d1-6afb8b1639cf","Type":"ContainerDied","Data":"0242062f006d48bfbc1e1e4e93d36f73f2cd849cf4ea2567a0085a63cbd2401b"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.884173 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" event={"ID":"0f5297fd-58f7-4678-94d1-6afb8b1639cf","Type":"ContainerStarted","Data":"237fd536bf76df75e7f89cd248704abff81c76798a60bb80de16e7b47744caa5"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.902530 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-gt5s5" event={"ID":"d2e8310b-4d7c-4c19-82af-587b427fc159","Type":"ContainerStarted","Data":"5d96bf53c45400533484a4427dd4c64054ae3569d2fdfdbb75f7966b61edf715"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.913758 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.913860 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.413841434 +0000 UTC m=+128.878053190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.914152 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:13 crc kubenswrapper[5116]: E1212 16:17:13.914517 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.414510142 +0000 UTC m=+128.878721898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.918325 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:13 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:13 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:13 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.918371 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.919265 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57662: no serving certificate available for the kubelet"
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.935663 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" event={"ID":"8d5bc6a0-fc54-4b74-bd8f-801b601f096d","Type":"ContainerStarted","Data":"834abfc6290f9f8d7e1ffe91cb1fd32acd54c3af114c6f7ba6927e36edaac318"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.935721 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" event={"ID":"8d5bc6a0-fc54-4b74-bd8f-801b601f096d","Type":"ContainerStarted","Data":"70c3bb08f5849beff6be04c4e67642a8f32a68807a614c51ac2ad5ed529a21a1"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.973035 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" event={"ID":"64b5e43d-0337-46b2-b4be-93dbb15ef982","Type":"ContainerStarted","Data":"6b9521ff77568987d47d61a5c224109a4a60d0adce72b483f2def2a584383b1e"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.973097 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" event={"ID":"64b5e43d-0337-46b2-b4be-93dbb15ef982","Type":"ContainerStarted","Data":"3d7cc287e4639a9e35578447f71fdafe8dc4bde55e881474885bd9eff3552d5e"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.986309 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" event={"ID":"95823ee2-7080-4b23-87d9-e69d42ab1787","Type":"ContainerStarted","Data":"83455c7c1816b7b724bf5b112504577fcb1d341bada3c79cce77c833804750fb"}
Dec 12 16:17:13 crc kubenswrapper[5116]: I1212 16:17:13.986367 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" event={"ID":"95823ee2-7080-4b23-87d9-e69d42ab1787","Type":"ContainerStarted","Data":"b87f26b723a5815ba5e89b2c8afcc671e0718cf923455fe4c4b7523efdef3ea2"}
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.026037 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sh26n" podStartSLOduration=108.026015887 podStartE2EDuration="1m48.026015887s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:13.966833478 +0000 UTC m=+128.431045234" watchObservedRunningTime="2025-12-12 16:17:14.026015887 +0000 UTC m=+128.490227643"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.026229 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.026312 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.526288625 +0000 UTC m=+128.990500381 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.027164 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k8t8q" podStartSLOduration=108.027158018 podStartE2EDuration="1m48.027158018s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.025698869 +0000 UTC m=+128.489910625" watchObservedRunningTime="2025-12-12 16:17:14.027158018 +0000 UTC m=+128.491369774"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.027444 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.029018 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.529003178 +0000 UTC m=+128.993214934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.086974 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" event={"ID":"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3","Type":"ContainerStarted","Data":"4a6fe96450437db87bb10b32ee93b1471b8d1986c463c70a367e75a91d185962"}
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.087042 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" event={"ID":"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3","Type":"ContainerStarted","Data":"c6c6adf1d7efb617d2d842e5e7f7fcfc9791ff815fe71743c55f83bea642f7ec"}
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.087174 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.127480 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57672: no serving certificate available for the kubelet"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.129281 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.130640 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.630617388 +0000 UTC m=+129.094829144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.134694 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" event={"ID":"753e4cbf-dd62-4448-ab39-6f28a23c7ca2","Type":"ContainerStarted","Data":"a287ff298f5ad5264f5042e61e5360b265380657a357aa755493f7ae40261145"}
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.137870 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" podStartSLOduration=108.137860832 podStartE2EDuration="1m48.137860832s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.136520386 +0000 UTC m=+128.600732142" watchObservedRunningTime="2025-12-12 16:17:14.137860832 +0000 UTC m=+128.602072588"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.165870 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" podStartSLOduration=108.165846784 podStartE2EDuration="1m48.165846784s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.163723568 +0000 UTC m=+128.627935324" watchObservedRunningTime="2025-12-12 16:17:14.165846784 +0000 UTC m=+128.630058540"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.172021 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" event={"ID":"1da6019f-ecaf-43cc-8df2-cddce4345203","Type":"ContainerStarted","Data":"4f6696fc9a86115a52151a25766739590e8fb70f7b6a92166f6f6a8618a86b3a"}
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.172086 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" event={"ID":"1da6019f-ecaf-43cc-8df2-cddce4345203","Type":"ContainerStarted","Data":"92ba96cdd1e207e23e044d24ebc488ecc8a71a6aff7c81d5ac6c13af2500ab3a"}
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.236351 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.237137 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-97glr" event={"ID":"ed4930f4-4d37-415e-a712-9574322f6ccc","Type":"ContainerStarted","Data":"cd8d27a4a8a400584e8cf0d131e45fae088cd87dfa99af70308f4fc56e7661e1"}
Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.237905 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.737889289 +0000 UTC m=+129.202101045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.238665 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-97glr"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.266344 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qrb8l" podStartSLOduration=108.266298543 podStartE2EDuration="1m48.266298543s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.228902768 +0000 UTC m=+128.693114544" watchObservedRunningTime="2025-12-12 16:17:14.266298543 +0000 UTC m=+128.730510289"
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.278885 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" event={"ID":"b80cc078-24bf-4a75-b2ae-76f252e843f9","Type":"ContainerStarted","Data":"ae8d926e55f7bcb5de993de86c6aeecb53834e3be6e4d1ea9eb105f9f1dc1558"}
Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.278957 5116 kubelet.go:2569] "SyncLoop
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" event={"ID":"b80cc078-24bf-4a75-b2ae-76f252e843f9","Type":"ContainerStarted","Data":"87564990a85e218ff496a83e7deae435f8dfb689363883eeeba90507d97a652b"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.319430 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-97glr" podStartSLOduration=10.31940094 podStartE2EDuration="10.31940094s" podCreationTimestamp="2025-12-12 16:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.298379815 +0000 UTC m=+128.762591571" watchObservedRunningTime="2025-12-12 16:17:14.31940094 +0000 UTC m=+128.783612696" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.321168 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" event={"ID":"4a88744b-ced0-4609-bede-f65d27510b47","Type":"ContainerStarted","Data":"a053060aeb21714ef3ab845390ad257aab158432589556bd135c79de6f6c60db"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.342582 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" event={"ID":"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6","Type":"ContainerStarted","Data":"7a09e52d6c5361af891dc53ed6e15d5dd3d8869c7ea0f172cd62c2d10a37761f"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.342639 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" event={"ID":"6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6","Type":"ContainerStarted","Data":"d728c9ba68b45bc424e517685e938d92e9b6407aded529b9e91849b4033991fd"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.343578 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" podStartSLOduration=108.343554299 podStartE2EDuration="1m48.343554299s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.342068549 +0000 UTC m=+128.806280305" watchObservedRunningTime="2025-12-12 16:17:14.343554299 +0000 UTC m=+128.807766045" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.343685 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.343735 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.348029 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.848003198 +0000 UTC m=+129.312214954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.367237 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" podStartSLOduration=108.367214205 podStartE2EDuration="1m48.367214205s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.366835995 +0000 UTC m=+128.831047751" watchObservedRunningTime="2025-12-12 16:17:14.367214205 +0000 UTC m=+128.831425961" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.373455 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" event={"ID":"2a15c5fa-bcc2-4558-a2cb-82ad217e3f1e","Type":"ContainerStarted","Data":"e5f27e2d41b8a373c3eb3ce5eec993f9af42b53502d56d20af9f8f3cf13f33f5"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.374539 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.387742 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.396938 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" event={"ID":"dd889f29-959e-4c5f-b7d0-44e2ef38dc22","Type":"ContainerStarted","Data":"f8c8c8e5342e2d9fac34bad0e9916c8ffad120d46490b1021039870db3f98254"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.397002 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" event={"ID":"dd889f29-959e-4c5f-b7d0-44e2ef38dc22","Type":"ContainerStarted","Data":"34ceb65ab6b39ce04d6a8b690404a0cf83a139d457d63dc35bb7967ceaa4d55e"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.399806 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rtljx" podStartSLOduration=108.3997895 podStartE2EDuration="1m48.3997895s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.398665289 +0000 UTC m=+128.862877045" watchObservedRunningTime="2025-12-12 16:17:14.3997895 +0000 UTC m=+128.864001256" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.406442 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" event={"ID":"572c6180-44e5-4299-afe5-a5483f6e0711","Type":"ContainerStarted","Data":"0656ecd1853b7d52861a429208904f1d0a2b190138ee31d61c891e5bad4a0c7f"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.414875 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" event={"ID":"d48aaed9-8c63-4ef9-823b-1c58fadbcc17","Type":"ContainerStarted","Data":"983c73d6151ef8f8f7a5e25eb22b5301ed68208353f18ba4fbbdbe1e7413bb63"} Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.414920 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.417826 5116 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-lkvbc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.417917 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" podUID="b9c44a8b-640d-4806-a985-d12ada8b88dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.417985 5116 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-mbmd7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.418072 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" podUID="d48aaed9-8c63-4ef9-823b-1c58fadbcc17" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.418400 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-g5nbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.418456 5116 
prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-g5nbl" podUID="24053646-aeb7-426b-8065-63075e9aa0c8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.428137 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" podStartSLOduration=108.428082559 podStartE2EDuration="1m48.428082559s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.423220849 +0000 UTC m=+128.887432615" watchObservedRunningTime="2025-12-12 16:17:14.428082559 +0000 UTC m=+128.892294315" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.432063 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.435978 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.436061 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-qh8zt" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.444926 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:14 crc 
kubenswrapper[5116]: E1212 16:17:14.446206 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:14.946193816 +0000 UTC m=+129.410405572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.497077 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" podStartSLOduration=108.497056693 podStartE2EDuration="1m48.497056693s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.495305486 +0000 UTC m=+128.959517242" watchObservedRunningTime="2025-12-12 16:17:14.497056693 +0000 UTC m=+128.961268449" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.511294 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57684: no serving certificate available for the kubelet" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.551091 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:14 crc 
kubenswrapper[5116]: E1212 16:17:14.556433 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.056407828 +0000 UTC m=+129.520619584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.565259 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.574643 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.074621137 +0000 UTC m=+129.538832893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.610986 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-ng2wp" podStartSLOduration=108.610970524 podStartE2EDuration="1m48.610970524s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:14.55799301 +0000 UTC m=+129.022204766" watchObservedRunningTime="2025-12-12 16:17:14.610970524 +0000 UTC m=+129.075182280" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.666716 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.667187 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.167166623 +0000 UTC m=+129.631378379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.769215 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.769529 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.269516143 +0000 UTC m=+129.733727889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.870783 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.871234 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.371215526 +0000 UTC m=+129.835427282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.917473 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:14 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:14 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:14 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.917578 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:14 crc kubenswrapper[5116]: I1212 16:17:14.972731 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:14 crc kubenswrapper[5116]: E1212 16:17:14.973082 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:15.473068952 +0000 UTC m=+129.937280708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.074162 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.074636 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.57462027 +0000 UTC m=+130.038832026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.176136 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.176516 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.676503248 +0000 UTC m=+130.140714994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.240613 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57696: no serving certificate available for the kubelet" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.277718 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.278055 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.778013734 +0000 UTC m=+130.242225490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.278401 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.278781 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.778764695 +0000 UTC m=+130.242976451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.344390 5116 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6fnz8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.344584 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" podUID="6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.380513 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.380646 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:15.880620572 +0000 UTC m=+130.344832328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.381075 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.381679 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.881656049 +0000 UTC m=+130.345867805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.419745 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" event={"ID":"4a88744b-ced0-4609-bede-f65d27510b47","Type":"ContainerStarted","Data":"b5549ff831a5948b56b04d655531679bbd7fe142338ad4afb94689f854f78a4b"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.422589 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-87slr" event={"ID":"dd889f29-959e-4c5f-b7d0-44e2ef38dc22","Type":"ContainerStarted","Data":"6c39996a169e34000f022e91a749b31dc60b8a7434f05c08d07749ed39fb0efd"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.425313 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" event={"ID":"57dbe731-30cc-45f4-b457-346f62af94fa","Type":"ContainerStarted","Data":"39e2eb96429a7e82622e6143ca78f352a2dd2198b8e52ea5d0e45ec3294c0563"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.426847 5116 generic.go:358] "Generic (PLEG): container finished" podID="8d377873-6680-42c5-afb1-52f63ffff4a4" containerID="2ef870a9b4c33bf4c32b0979f65be74690926314c79106d1bb106aaabb50334d" exitCode=0 Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.426913 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" 
event={"ID":"8d377873-6680-42c5-afb1-52f63ffff4a4","Type":"ContainerDied","Data":"2ef870a9b4c33bf4c32b0979f65be74690926314c79106d1bb106aaabb50334d"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.428711 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-648v2" event={"ID":"957f59ba-d9a7-424b-94bb-8899126450ed","Type":"ContainerStarted","Data":"c67667e4686224caced47ba99727a52e6080fba285a107158e7cbfe8ff06c96a"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.432313 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" event={"ID":"0f5297fd-58f7-4678-94d1-6afb8b1639cf","Type":"ContainerStarted","Data":"ad818719441fd86d741fae975b36562d373d1d619792f3558a347b39723c8a3c"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.435086 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" event={"ID":"41bfba7f-9125-4770-99ea-3b72ddc0173b","Type":"ContainerStarted","Data":"46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.439233 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" event={"ID":"64b5e43d-0337-46b2-b4be-93dbb15ef982","Type":"ContainerStarted","Data":"e39889db4afd569eadeb06c5284f933967c3dd8aea7f1225a50c50872427c102"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.440984 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.441053 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.461629 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f" event={"ID":"e30a5a66-1aa8-4e8b-8ca1-9796e082f7d3","Type":"ContainerStarted","Data":"87fa44a9d23e1243bc6c1d616989474a480e1a8094fa5d2e7ff8425f4854b6e4"} Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.464196 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-g5nbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.464246 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-g5nbl" podUID="24053646-aeb7-426b-8065-63075e9aa0c8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.467213 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" gracePeriod=30 Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.481176 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mbmd7" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.482667 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:15 
crc kubenswrapper[5116]: E1212 16:17:15.483191 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:15.983169387 +0000 UTC m=+130.447381143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.486125 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-k9w8q" podStartSLOduration=109.486113426 podStartE2EDuration="1m49.486113426s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:15.484473812 +0000 UTC m=+129.948685568" watchObservedRunningTime="2025-12-12 16:17:15.486113426 +0000 UTC m=+129.950325182" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.567763 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" podStartSLOduration=109.567747749 podStartE2EDuration="1m49.567747749s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:15.563902426 +0000 UTC m=+130.028114192" watchObservedRunningTime="2025-12-12 16:17:15.567747749 +0000 UTC 
m=+130.031959505" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.590546 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.596025 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.096009118 +0000 UTC m=+130.560220874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.630955 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xztm9" podStartSLOduration=109.630930546 podStartE2EDuration="1m49.630930546s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:15.605304318 +0000 UTC m=+130.069516084" watchObservedRunningTime="2025-12-12 16:17:15.630930546 +0000 UTC m=+130.095142302" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.697583 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.698190 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.198169153 +0000 UTC m=+130.662380919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.726908 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" podStartSLOduration=109.726889085 podStartE2EDuration="1m49.726889085s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:15.723057291 +0000 UTC m=+130.187269057" watchObservedRunningTime="2025-12-12 16:17:15.726889085 +0000 UTC m=+130.191100841" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.754534 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" podStartSLOduration=109.754509997 
podStartE2EDuration="1m49.754509997s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:15.753813188 +0000 UTC m=+130.218024944" watchObservedRunningTime="2025-12-12 16:17:15.754509997 +0000 UTC m=+130.218721753" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.799448 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.799851 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.299838255 +0000 UTC m=+130.764050011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.896836 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.900432 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:15 crc kubenswrapper[5116]: E1212 16:17:15.900838 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.400817757 +0000 UTC m=+130.865029513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.911606 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:15 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:15 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:15 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.911686 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.934266 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.943446 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.948046 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.948599 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 12 16:17:15 crc kubenswrapper[5116]: I1212 16:17:15.960073 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.001848 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.001894 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a301e-a8e4-44e7-825d-63df4c3cc031-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.001922 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a301e-a8e4-44e7-825d-63df4c3cc031-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.002414 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.502391947 +0000 UTC m=+130.966603883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.103265 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.103714 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.603677298 +0000 UTC m=+131.067889164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.103848 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.103937 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a301e-a8e4-44e7-825d-63df4c3cc031-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.104141 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a301e-a8e4-44e7-825d-63df4c3cc031-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.104527 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:16.60450183 +0000 UTC m=+131.068713586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.104900 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a301e-a8e4-44e7-825d-63df4c3cc031-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.152604 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a301e-a8e4-44e7-825d-63df4c3cc031-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.156048 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zt54g"] Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.167761 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.171206 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt54g"] Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.176349 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.205376 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.205839 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.705799111 +0000 UTC m=+131.170010867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.205939 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56k45\" (UniqueName: \"kubernetes.io/projected/c33c5b2d-507a-41c8-884d-e5ec63c2894c-kube-api-access-56k45\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.205993 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-catalog-content\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.206097 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-utilities\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.206285 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.206609 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.706598783 +0000 UTC m=+131.170810729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.279317 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.311859 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.312410 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.812355304 +0000 UTC m=+131.276567070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.313170 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-56k45\" (UniqueName: \"kubernetes.io/projected/c33c5b2d-507a-41c8-884d-e5ec63c2894c-kube-api-access-56k45\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.313245 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-catalog-content\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.313371 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-utilities\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.313599 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.314006 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.813977118 +0000 UTC m=+131.278188864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.314888 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-catalog-content\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.316699 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-utilities\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.331747 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mksww"]
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.339713 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-56k45\" (UniqueName: \"kubernetes.io/projected/c33c5b2d-507a-41c8-884d-e5ec63c2894c-kube-api-access-56k45\") pod \"certified-operators-zt54g\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " pod="openshift-marketplace/certified-operators-zt54g"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.345507 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.349494 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.356803 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mksww"]
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.419736 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.419955 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzkd\" (UniqueName: \"kubernetes.io/projected/8d9629b0-298f-4c07-a908-e83a59c4c402-kube-api-access-mxzkd\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.419983 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-catalog-content\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.420014 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-utilities\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.420368 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:16.920316725 +0000 UTC m=+131.384528481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.466794 5116 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6fnz8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": context deadline exceeded" start-of-body=
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.466915 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" podUID="6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": context deadline exceeded"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.477545 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" event={"ID":"8d377873-6680-42c5-afb1-52f63ffff4a4","Type":"ContainerStarted","Data":"ec3f7c4a240f2c0620dae93eacaf33672cc86bea0ac8d82f1afbb741df7ed2f4"}
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.518669 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt54g"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.527550 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qxjm"]
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.548081 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxzkd\" (UniqueName: \"kubernetes.io/projected/8d9629b0-298f-4c07-a908-e83a59c4c402-kube-api-access-mxzkd\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.551944 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-catalog-content\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.552024 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-utilities\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.552449 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.551223 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qxjm"]
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.553480 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-catalog-content\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.551434 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.555137 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-utilities\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.557843 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.057825459 +0000 UTC m=+131.522037215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.578121 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57712: no serving certificate available for the kubelet"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.602597 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxzkd\" (UniqueName: \"kubernetes.io/projected/8d9629b0-298f-4c07-a908-e83a59c4c402-kube-api-access-mxzkd\") pod \"community-operators-mksww\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.654957 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.655493 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-utilities\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.655527 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wzbj\" (UniqueName: \"kubernetes.io/projected/3f797462-5a8d-4865-ac29-f49ef38173d2-kube-api-access-2wzbj\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.655635 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-catalog-content\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.655769 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.15575195 +0000 UTC m=+131.619963706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.686467 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mksww"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.730569 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vz4rg"]
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.756602 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2wzbj\" (UniqueName: \"kubernetes.io/projected/3f797462-5a8d-4865-ac29-f49ef38173d2-kube-api-access-2wzbj\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.756720 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-catalog-content\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.756759 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.756792 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-utilities\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.757247 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-utilities\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.757739 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-catalog-content\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.757980 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.257967426 +0000 UTC m=+131.722179182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.800332 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wzbj\" (UniqueName: \"kubernetes.io/projected/3f797462-5a8d-4865-ac29-f49ef38173d2-kube-api-access-2wzbj\") pod \"certified-operators-7qxjm\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") " pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.858170 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.858319 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.358288752 +0000 UTC m=+131.822500518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.865980 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.866929 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.366908003 +0000 UTC m=+131.831119779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.912971 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:16 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:16 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:16 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.913058 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.923549 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:17:16 crc kubenswrapper[5116]: W1212 16:17:16.952459 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc33c5b2d_507a_41c8_884d_e5ec63c2894c.slice/crio-3c124c5de9b2a466f8f72cea880f2205e141255528fbfd3c1b2722b0c844d209 WatchSource:0}: Error finding container 3c124c5de9b2a466f8f72cea880f2205e141255528fbfd3c1b2722b0c844d209: Status 404 returned error can't find the container with id 3c124c5de9b2a466f8f72cea880f2205e141255528fbfd3c1b2722b0c844d209
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.968814 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.969013 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.468964895 +0000 UTC m=+131.933176651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:16 crc kubenswrapper[5116]: I1212 16:17:16.969308 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:16 crc kubenswrapper[5116]: E1212 16:17:16.969776 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.469750867 +0000 UTC m=+131.933962623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.070942 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.071229 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.571188212 +0000 UTC m=+132.035399968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.071756 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.072172 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.572155808 +0000 UTC m=+132.036367564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: W1212 16:17:17.073687 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9629b0_298f_4c07_a908_e83a59c4c402.slice/crio-b697ff7843247cef5dcf863efe90098ea1be941bad228a8f89c4cb0e92c0364c WatchSource:0}: Error finding container b697ff7843247cef5dcf863efe90098ea1be941bad228a8f89c4cb0e92c0364c: Status 404 returned error can't find the container with id b697ff7843247cef5dcf863efe90098ea1be941bad228a8f89c4cb0e92c0364c
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.173057 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.173256 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.673228033 +0000 UTC m=+132.137439789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.173732 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.174207 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.674189229 +0000 UTC m=+132.138400975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: W1212 16:17:17.179253 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f797462_5a8d_4865_ac29_f49ef38173d2.slice/crio-d88ac35a9d4eb62d7533935d22e8145aedb9a517a24260361616215caff3e655 WatchSource:0}: Error finding container d88ac35a9d4eb62d7533935d22e8145aedb9a517a24260361616215caff3e655: Status 404 returned error can't find the container with id d88ac35a9d4eb62d7533935d22e8145aedb9a517a24260361616215caff3e655
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.274994 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.275437 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.775417879 +0000 UTC m=+132.239629635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.376435 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.376823 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.876810643 +0000 UTC m=+132.341022399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.478120 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.478245 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.978212077 +0000 UTC m=+132.442423833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.478688 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.479130 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:17.97909111 +0000 UTC m=+132.443302866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.485820 5116 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6fnz8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.485865 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8" podUID="6bf89047-4fcf-49da-8a5e-56f2a5a2b6c6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.580628 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.581004 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.080986828 +0000 UTC m=+132.545198584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.682449 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.682871 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.182852135 +0000 UTC m=+132.647063901 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.784187 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.784503 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.284467655 +0000 UTC m=+132.748679421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.784596 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.785080 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.285067852 +0000 UTC m=+132.749279608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.886663 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.886802 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.386781915 +0000 UTC m=+132.850993671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.887469 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.887781 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.387771901 +0000 UTC m=+132.851983657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.911498 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:17 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:17 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:17 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.911586 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:17 crc kubenswrapper[5116]: I1212 16:17:17.988349 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:17 crc kubenswrapper[5116]: E1212 16:17:17.988797 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:17:18.488768754 +0000 UTC m=+132.952980510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029450 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" event={"ID":"8d377873-6680-42c5-afb1-52f63ffff4a4","Type":"ContainerStarted","Data":"24dbcb177fcc89924bdd24811b7f3a8a6b815428d3ee07531837bf3af76131fd"} Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029549 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vz4rg"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029619 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt54g" event={"ID":"c33c5b2d-507a-41c8-884d-e5ec63c2894c","Type":"ContainerStarted","Data":"3c124c5de9b2a466f8f72cea880f2205e141255528fbfd3c1b2722b0c844d209"} Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029650 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029677 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt54g"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029701 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mksww"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029721 5116 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-mksww" event={"ID":"8d9629b0-298f-4c07-a908-e83a59c4c402","Type":"ContainerStarted","Data":"b697ff7843247cef5dcf863efe90098ea1be941bad228a8f89c4cb0e92c0364c"} Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029738 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qxjm"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029760 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qxjm" event={"ID":"3f797462-5a8d-4865-ac29-f49ef38173d2","Type":"ContainerStarted","Data":"d88ac35a9d4eb62d7533935d22e8145aedb9a517a24260361616215caff3e655"} Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.029776 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a301e-a8e4-44e7-825d-63df4c3cc031","Type":"ContainerStarted","Data":"ba14eac5be2c46c644a62ae05cafd81e4bd2a59f0faf8379c7115ec20185feed"} Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.030005 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.090551 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.090863 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-822mk\" (UniqueName: \"kubernetes.io/projected/fc7f231b-6d94-4157-8f03-efca4baf4da2-kube-api-access-822mk\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.091006 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.59098832 +0000 UTC m=+133.055200076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.091274 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-catalog-content\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.091379 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-utilities\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.122213 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2qt2j"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.193285 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.193515 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.693486905 +0000 UTC m=+133.157698651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.193639 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-822mk\" (UniqueName: \"kubernetes.io/projected/fc7f231b-6d94-4157-8f03-efca4baf4da2-kube-api-access-822mk\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.193704 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-catalog-content\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.193760 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-utilities\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.193919 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.194324 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.694314586 +0000 UTC m=+133.158526342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.194988 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-catalog-content\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.195042 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-utilities\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc 
kubenswrapper[5116]: I1212 16:17:18.220679 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-822mk\" (UniqueName: \"kubernetes.io/projected/fc7f231b-6d94-4157-8f03-efca4baf4da2-kube-api-access-822mk\") pod \"community-operators-vz4rg\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") " pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.295907 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.296196 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.796158983 +0000 UTC m=+133.260370749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.296782 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.297217 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.797198901 +0000 UTC m=+133.261410657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.344892 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.398788 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.401003 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:18.900975249 +0000 UTC m=+133.365187025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.424828 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qt2j"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.424975 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-24z4m" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.425560 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.428457 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.501262 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-utilities\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.501342 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-catalog-content\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.501504 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27c6\" (UniqueName: \"kubernetes.io/projected/f85c27f2-e8ee-400f-8f2a-5e389b670e09-kube-api-access-b27c6\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.501604 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:18 crc 
kubenswrapper[5116]: E1212 16:17:18.502084 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.002066425 +0000 UTC m=+133.466278181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.502987 5116 generic.go:358] "Generic (PLEG): container finished" podID="b80cc078-24bf-4a75-b2ae-76f252e843f9" containerID="ae8d926e55f7bcb5de993de86c6aeecb53834e3be6e4d1ea9eb105f9f1dc1558" exitCode=0 Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.503318 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" event={"ID":"b80cc078-24bf-4a75-b2ae-76f252e843f9","Type":"ContainerDied","Data":"ae8d926e55f7bcb5de993de86c6aeecb53834e3be6e4d1ea9eb105f9f1dc1558"} Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.542271 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wh7sg"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.569271 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" podStartSLOduration=112.56924837 podStartE2EDuration="1m52.56924837s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-12 16:17:18.567720169 +0000 UTC m=+133.031931925" watchObservedRunningTime="2025-12-12 16:17:18.56924837 +0000 UTC m=+133.033460126" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.609522 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.611273 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.111238578 +0000 UTC m=+133.575450334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.617576 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-utilities\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.617742 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-catalog-content\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.618145 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b27c6\" (UniqueName: \"kubernetes.io/projected/f85c27f2-e8ee-400f-8f2a-5e389b670e09-kube-api-access-b27c6\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.618258 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.621086 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-catalog-content\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.621300 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.121281618 +0000 UTC m=+133.585493374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.623829 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-utilities\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.658558 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27c6\" (UniqueName: \"kubernetes.io/projected/f85c27f2-e8ee-400f-8f2a-5e389b670e09-kube-api-access-b27c6\") pod \"redhat-marketplace-2qt2j\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: W1212 16:17:18.702525 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc7f231b_6d94_4157_8f03_efca4baf4da2.slice/crio-cd5f4821ccf74098c162143faf75954decdd6ae9f9bb469158b98fdd2fb6f610 WatchSource:0}: Error finding container cd5f4821ccf74098c162143faf75954decdd6ae9f9bb469158b98fdd2fb6f610: Status 404 returned error can't find the container with id cd5f4821ccf74098c162143faf75954decdd6ae9f9bb469158b98fdd2fb6f610 Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.720218 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.720697 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.220672758 +0000 UTC m=+133.684884514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.821836 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.822194 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.32218039 +0000 UTC m=+133.786392146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.843539 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh7sg"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.843644 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vz4rg"] Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.843668 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.872315 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.912552 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:18 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:18 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:18 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.912647 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.927750 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.927979 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.427934782 +0000 UTC m=+133.892146538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.929629 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-utilities\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.929687 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-catalog-content\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.930039 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:18 crc kubenswrapper[5116]: I1212 16:17:18.930330 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnb5\" (UniqueName: 
\"kubernetes.io/projected/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-kube-api-access-fcnb5\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:18 crc kubenswrapper[5116]: E1212 16:17:18.930608 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.430590454 +0000 UTC m=+133.894802210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.032036 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.032600 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fcnb5\" (UniqueName: \"kubernetes.io/projected/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-kube-api-access-fcnb5\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.032700 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-utilities\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.032727 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-catalog-content\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.033469 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-catalog-content\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.033550 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.533534111 +0000 UTC m=+133.997745857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.034119 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-utilities\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.060256 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcnb5\" (UniqueName: \"kubernetes.io/projected/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-kube-api-access-fcnb5\") pod \"redhat-marketplace-wh7sg\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") " pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.124845 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qt2j"] Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.136135 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.136571 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.137710 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.63768932 +0000 UTC m=+134.101901076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.137086 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.146165 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.167705 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46296: no serving certificate available for the kubelet" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.173154 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.186519 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pq598" Dec 12 16:17:19 crc kubenswrapper[5116]: W1212 16:17:19.183721 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85c27f2_e8ee_400f_8f2a_5e389b670e09.slice/crio-91f2dfdc602b4ad3fcaa2f2b197725a2e9881e76fd788d76545d20a209581195 WatchSource:0}: Error finding container 91f2dfdc602b4ad3fcaa2f2b197725a2e9881e76fd788d76545d20a209581195: Status 404 returned error can't find the container with id 91f2dfdc602b4ad3fcaa2f2b197725a2e9881e76fd788d76545d20a209581195 Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.225924 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-g5nbl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.226003 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-g5nbl" podUID="24053646-aeb7-426b-8065-63075e9aa0c8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.241128 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.241694 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.741668904 +0000 UTC m=+134.205880660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.338172 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zmzmp"] Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.343581 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.344030 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.844012004 +0000 UTC m=+134.308223760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.384062 5116 patch_prober.go:28] interesting pod/console-64d44f6ddf-qhrd4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.384615 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-qhrd4" podUID="b68bd1cf-aa0c-43e2-a771-11c6c91d19dc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.417420 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmzmp"] Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.417483 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.417507 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-qhrd4" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.417945 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.424484 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.445023 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.445431 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:19.945400249 +0000 UTC m=+134.409612005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.529819 5116 generic.go:358] "Generic (PLEG): container finished" podID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerID="67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7" exitCode=0 Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.529940 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt54g" event={"ID":"c33c5b2d-507a-41c8-884d-e5ec63c2894c","Type":"ContainerDied","Data":"67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7"} Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.536731 5116 generic.go:358] "Generic (PLEG): container finished" podID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerID="09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4" exitCode=0 Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.536888 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mksww" event={"ID":"8d9629b0-298f-4c07-a908-e83a59c4c402","Type":"ContainerDied","Data":"09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4"} Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.545446 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh7sg"] Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.554580 5116 generic.go:358] "Generic (PLEG): container finished" podID="fc7f231b-6d94-4157-8f03-efca4baf4da2" 
containerID="d59b96084d881256cf230fb4c499ba0c551e68eac2147e9f64d58d73d6f3c162" exitCode=0 Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.554765 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4rg" event={"ID":"fc7f231b-6d94-4157-8f03-efca4baf4da2","Type":"ContainerDied","Data":"d59b96084d881256cf230fb4c499ba0c551e68eac2147e9f64d58d73d6f3c162"} Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.554803 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4rg" event={"ID":"fc7f231b-6d94-4157-8f03-efca4baf4da2","Type":"ContainerStarted","Data":"cd5f4821ccf74098c162143faf75954decdd6ae9f9bb469158b98fdd2fb6f610"} Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.558855 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.558992 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-utilities\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.559095 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-catalog-content\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 
12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.560021 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.060003302 +0000 UTC m=+134.524215058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.560248 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z884\" (UniqueName: \"kubernetes.io/projected/01d69feb-2b7f-4fa0-9d55-d8d13736324d-kube-api-access-8z884\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.565919 5116 generic.go:358] "Generic (PLEG): container finished" podID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerID="3945fc05432fd9da74da13d37e668b037fdb76e934c58da3e0a8316fc2d8064d" exitCode=0 Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.566147 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qxjm" event={"ID":"3f797462-5a8d-4865-ac29-f49ef38173d2","Type":"ContainerDied","Data":"3945fc05432fd9da74da13d37e668b037fdb76e934c58da3e0a8316fc2d8064d"} Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.575006 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"f22a301e-a8e4-44e7-825d-63df4c3cc031","Type":"ContainerStarted","Data":"1c73ed400142bd7831a7f9985bf2d10efc345fc49c76683c43a4863dc6b407bb"} Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.597422 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qt2j" event={"ID":"f85c27f2-e8ee-400f-8f2a-5e389b670e09","Type":"ContainerStarted","Data":"91f2dfdc602b4ad3fcaa2f2b197725a2e9881e76fd788d76545d20a209581195"} Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.670468 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.671075 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-utilities\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.671214 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-catalog-content\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.671339 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8z884\" (UniqueName: \"kubernetes.io/projected/01d69feb-2b7f-4fa0-9d55-d8d13736324d-kube-api-access-8z884\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " 
pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.670477 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=4.67045586 podStartE2EDuration="4.67045586s" podCreationTimestamp="2025-12-12 16:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:19.669798453 +0000 UTC m=+134.134010209" watchObservedRunningTime="2025-12-12 16:17:19.67045586 +0000 UTC m=+134.134667616" Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.672793 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.172774634 +0000 UTC m=+134.636986390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.675466 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-catalog-content\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.675684 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-utilities\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.707174 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z884\" (UniqueName: \"kubernetes.io/projected/01d69feb-2b7f-4fa0-9d55-d8d13736324d-kube-api-access-8z884\") pod \"redhat-operators-zmzmp\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.759518 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bnzrx"] Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.772946 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.776493 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.776959 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.276942494 +0000 UTC m=+134.741154250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.803398 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnzrx"] Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.878375 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.878736 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-catalog-content\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.878853 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxglf\" (UniqueName: \"kubernetes.io/projected/aed82316-dc90-4d53-bffe-b135a7ebf47d-kube-api-access-sxglf\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.878914 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-utilities\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.879041 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.379017136 +0000 UTC m=+134.843228892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.912029 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.914451 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:19 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:19 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:19 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.914651 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.919463 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.980557 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-catalog-content\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.981339 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.981454 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxglf\" (UniqueName: \"kubernetes.io/projected/aed82316-dc90-4d53-bffe-b135a7ebf47d-kube-api-access-sxglf\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.981593 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-utilities\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.982192 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-catalog-content\") pod \"redhat-operators-bnzrx\" (UID: 
\"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: I1212 16:17:19.982350 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-utilities\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:19 crc kubenswrapper[5116]: E1212 16:17:19.982633 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.48259747 +0000 UTC m=+134.946809226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.008211 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxglf\" (UniqueName: \"kubernetes.io/projected/aed82316-dc90-4d53-bffe-b135a7ebf47d-kube-api-access-sxglf\") pod \"redhat-operators-bnzrx\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") " pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.032417 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.084811 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.085049 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80cc078-24bf-4a75-b2ae-76f252e843f9-config-volume\") pod \"b80cc078-24bf-4a75-b2ae-76f252e843f9\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.085163 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkd22\" (UniqueName: \"kubernetes.io/projected/b80cc078-24bf-4a75-b2ae-76f252e843f9-kube-api-access-dkd22\") pod \"b80cc078-24bf-4a75-b2ae-76f252e843f9\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.085195 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80cc078-24bf-4a75-b2ae-76f252e843f9-secret-volume\") pod \"b80cc078-24bf-4a75-b2ae-76f252e843f9\" (UID: \"b80cc078-24bf-4a75-b2ae-76f252e843f9\") " Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.085819 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.585801723 +0000 UTC m=+135.050013479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.086494 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80cc078-24bf-4a75-b2ae-76f252e843f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "b80cc078-24bf-4a75-b2ae-76f252e843f9" (UID: "b80cc078-24bf-4a75-b2ae-76f252e843f9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.093425 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80cc078-24bf-4a75-b2ae-76f252e843f9-kube-api-access-dkd22" (OuterVolumeSpecName: "kube-api-access-dkd22") pod "b80cc078-24bf-4a75-b2ae-76f252e843f9" (UID: "b80cc078-24bf-4a75-b2ae-76f252e843f9"). InnerVolumeSpecName "kube-api-access-dkd22". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.099232 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80cc078-24bf-4a75-b2ae-76f252e843f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b80cc078-24bf-4a75-b2ae-76f252e843f9" (UID: "b80cc078-24bf-4a75-b2ae-76f252e843f9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.178709 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmzmp"] Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.188483 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.188671 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dkd22\" (UniqueName: \"kubernetes.io/projected/b80cc078-24bf-4a75-b2ae-76f252e843f9-kube-api-access-dkd22\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.188692 5116 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80cc078-24bf-4a75-b2ae-76f252e843f9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.188704 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80cc078-24bf-4a75-b2ae-76f252e843f9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.189193 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.689171412 +0000 UTC m=+135.153383168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.288579 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.292864 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.293519 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.793481685 +0000 UTC m=+135.257693441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.294069 5116 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.328809 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.328852 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.394549 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.394870 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.894856409 +0000 UTC m=+135.359068165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.496523 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.497279 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:20.997260681 +0000 UTC m=+135.461472437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.599273 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.599823 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:21.099803806 +0000 UTC m=+135.564015562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.627706 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" event={"ID":"b80cc078-24bf-4a75-b2ae-76f252e843f9","Type":"ContainerDied","Data":"87564990a85e218ff496a83e7deae435f8dfb689363883eeeba90507d97a652b"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.627725 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-jlznq" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.627756 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87564990a85e218ff496a83e7deae435f8dfb689363883eeeba90507d97a652b" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.635387 5116 generic.go:358] "Generic (PLEG): container finished" podID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerID="60aacfb9aae3138ed46a183ad77b3979bb962bfe60c4437b2347caa6ff6fbdf0" exitCode=0 Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.635800 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh7sg" event={"ID":"391fbeb8-9f81-40ca-b1f9-5bb977066fa7","Type":"ContainerDied","Data":"60aacfb9aae3138ed46a183ad77b3979bb962bfe60c4437b2347caa6ff6fbdf0"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.635840 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh7sg" 
event={"ID":"391fbeb8-9f81-40ca-b1f9-5bb977066fa7","Type":"ContainerStarted","Data":"8e8d1062638fbc844b83c335679cb633b8a3f091c3bdceca41e7d9a7389ab918"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.644956 5116 generic.go:358] "Generic (PLEG): container finished" podID="f22a301e-a8e4-44e7-825d-63df4c3cc031" containerID="1c73ed400142bd7831a7f9985bf2d10efc345fc49c76683c43a4863dc6b407bb" exitCode=0 Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.645291 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a301e-a8e4-44e7-825d-63df4c3cc031","Type":"ContainerDied","Data":"1c73ed400142bd7831a7f9985bf2d10efc345fc49c76683c43a4863dc6b407bb"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.647367 5116 generic.go:358] "Generic (PLEG): container finished" podID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerID="3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592" exitCode=0 Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.647544 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qt2j" event={"ID":"f85c27f2-e8ee-400f-8f2a-5e389b670e09","Type":"ContainerDied","Data":"3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.662450 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-648v2" event={"ID":"957f59ba-d9a7-424b-94bb-8899126450ed","Type":"ContainerStarted","Data":"f28ae40e731ab10a5c44aee3f34c955ae3756d57369f406e0ed3c1e0efb779a3"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.662536 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-648v2" event={"ID":"957f59ba-d9a7-424b-94bb-8899126450ed","Type":"ContainerStarted","Data":"2376b69c1c78eda55609ab5e363cc7ff1d8af65319f0c0023c30471b1d532f2a"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 
16:17:20.664911 5116 generic.go:358] "Generic (PLEG): container finished" podID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerID="e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e" exitCode=0 Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.665356 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmzmp" event={"ID":"01d69feb-2b7f-4fa0-9d55-d8d13736324d","Type":"ContainerDied","Data":"e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.665427 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmzmp" event={"ID":"01d69feb-2b7f-4fa0-9d55-d8d13736324d","Type":"ContainerStarted","Data":"1cfa1da787c6fc2ad08c84461b4ab4c34831254f8dcfbf9ca60ece99fb4cf1d5"} Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.703079 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.704409 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:21.204369957 +0000 UTC m=+135.668581713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.730657 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnzrx"] Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.804881 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.805722 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:21.30570888 +0000 UTC m=+135.769920636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.873431 5116 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-svwnw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]log ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]etcd ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/generic-apiserver-start-informers ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/max-in-flight-filter ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 12 16:17:20 crc kubenswrapper[5116]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 12 16:17:20 crc kubenswrapper[5116]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/project.openshift.io-projectcache ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/openshift.io-startinformers ok Dec 12 16:17:20 crc kubenswrapper[5116]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 12 16:17:20 crc 
kubenswrapper[5116]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 12 16:17:20 crc kubenswrapper[5116]: livez check failed Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.873511 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw" podUID="8d377873-6680-42c5-afb1-52f63ffff4a4" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.875955 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.876698 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b80cc078-24bf-4a75-b2ae-76f252e843f9" containerName="collect-profiles" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.876724 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80cc078-24bf-4a75-b2ae-76f252e843f9" containerName="collect-profiles" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.876852 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="b80cc078-24bf-4a75-b2ae-76f252e843f9" containerName="collect-profiles" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.895711 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.895867 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.910230 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:20 crc kubenswrapper[5116]: E1212 16:17:20.910567 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:21.410548598 +0000 UTC m=+135.874760354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.910616 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.910682 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.914668 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:20 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:20 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:20 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:20 crc kubenswrapper[5116]: I1212 16:17:20.914731 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.020281 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf5962e-a354-421c-b535-0e905c73d5b1-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.020964 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf5962e-a354-421c-b535-0e905c73d5b1-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.021207 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:21 crc kubenswrapper[5116]: E1212 16:17:21.021574 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:21.521561353 +0000 UTC m=+135.985773109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.123584 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.123839 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf5962e-a354-421c-b535-0e905c73d5b1-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.123907 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf5962e-a354-421c-b535-0e905c73d5b1-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:21 crc kubenswrapper[5116]: E1212 16:17:21.124579 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:17:21.624560881 +0000 UTC m=+136.088772637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.124648 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf5962e-a354-421c-b535-0e905c73d5b1-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.152880 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf5962e-a354-421c-b535-0e905c73d5b1-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.225268 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:21 crc kubenswrapper[5116]: E1212 
16:17:21.226131 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:17:21.726113101 +0000 UTC m=+136.190324857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qgtsr" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.243737 5116 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T16:17:20.294086092Z","UUID":"a49b6e9a-2aee-4d9c-b1c1-2807daebf9dc","Handler":null,"Name":"","Endpoint":""} Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.255674 5116 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.255719 5116 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.271769 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.327137 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.344149 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.430355 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.441083 5116 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.441231 5116 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.480507 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-97glr" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.565291 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qgtsr\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.695545 5116 generic.go:358] "Generic (PLEG): container finished" podID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerID="b3bbd5bdba90de48480f797099c9fcf8549d5ce6f53893d856ceaec95039dce0" exitCode=0 Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.696182 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnzrx" event={"ID":"aed82316-dc90-4d53-bffe-b135a7ebf47d","Type":"ContainerDied","Data":"b3bbd5bdba90de48480f797099c9fcf8549d5ce6f53893d856ceaec95039dce0"} Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.696258 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnzrx" 
event={"ID":"aed82316-dc90-4d53-bffe-b135a7ebf47d","Type":"ContainerStarted","Data":"e85db79263519bb23075d0f1302325ad1beec8d93061c85834ae4c13d523f4cc"} Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.723879 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-648v2" event={"ID":"957f59ba-d9a7-424b-94bb-8899126450ed","Type":"ContainerStarted","Data":"fb750a844ef47867338459152e23988916abf871ad47b633cc468712516cfc0b"} Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.790829 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.791986 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-648v2" podStartSLOduration=17.791951177 podStartE2EDuration="17.791951177s" podCreationTimestamp="2025-12-12 16:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:21.772379667 +0000 UTC m=+136.236591443" watchObservedRunningTime="2025-12-12 16:17:21.791951177 +0000 UTC m=+136.256162933" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.793236 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.912993 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:21 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:21 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:21 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:21 crc kubenswrapper[5116]: I1212 16:17:21.913088 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.066784 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.237631 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.330900 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.404405 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qgtsr"] Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.417961 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a301e-a8e4-44e7-825d-63df4c3cc031-kube-api-access\") pod \"f22a301e-a8e4-44e7-825d-63df4c3cc031\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.418054 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a301e-a8e4-44e7-825d-63df4c3cc031-kubelet-dir\") pod \"f22a301e-a8e4-44e7-825d-63df4c3cc031\" (UID: \"f22a301e-a8e4-44e7-825d-63df4c3cc031\") " Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.418543 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22a301e-a8e4-44e7-825d-63df4c3cc031-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f22a301e-a8e4-44e7-825d-63df4c3cc031" (UID: "f22a301e-a8e4-44e7-825d-63df4c3cc031"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.440319 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22a301e-a8e4-44e7-825d-63df4c3cc031-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f22a301e-a8e4-44e7-825d-63df4c3cc031" (UID: "f22a301e-a8e4-44e7-825d-63df4c3cc031"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.521363 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a301e-a8e4-44e7-825d-63df4c3cc031-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.521412 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a301e-a8e4-44e7-825d-63df4c3cc031-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:22 crc kubenswrapper[5116]: E1212 16:17:22.605639 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 16:17:22 crc kubenswrapper[5116]: E1212 16:17:22.620744 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 16:17:22 crc kubenswrapper[5116]: E1212 16:17:22.627832 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 16:17:22 crc kubenswrapper[5116]: E1212 16:17:22.627958 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.751249 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" event={"ID":"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6","Type":"ContainerStarted","Data":"8161b51cde5fc591ec8869c9b43e6efcfc72ee321185aeee4ee9c8e7e0473927"} Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.760523 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a301e-a8e4-44e7-825d-63df4c3cc031","Type":"ContainerDied","Data":"ba14eac5be2c46c644a62ae05cafd81e4bd2a59f0faf8379c7115ec20185feed"} Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.760593 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba14eac5be2c46c644a62ae05cafd81e4bd2a59f0faf8379c7115ec20185feed" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.760716 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.767726 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0cf5962e-a354-421c-b535-0e905c73d5b1","Type":"ContainerStarted","Data":"32f2c6743030a7143fe88237b605db2457ffd1494eb0c59b29385afc1b910e6f"} Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.913174 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:17:22 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 12 16:17:22 crc kubenswrapper[5116]: [+]process-running ok Dec 12 16:17:22 crc kubenswrapper[5116]: healthz check failed Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.913439 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.930984 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.931122 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: 
\"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.931156 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.931212 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.936488 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.948639 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.966958 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.967008 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:17:22 crc kubenswrapper[5116]: I1212 16:17:22.999792 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.007482 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.023454 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.067917 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.073955 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb955636-d9f0-41af-b498-6d380bb8ad2f-metrics-certs\") pod \"network-metrics-daemon-gbh7p\" (UID: \"eb955636-d9f0-41af-b498-6d380bb8ad2f\") " pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.315374 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbh7p" Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.772193 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gbh7p"] Dec 12 16:17:23 crc kubenswrapper[5116]: W1212 16:17:23.790782 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb955636_d9f0_41af_b498_6d380bb8ad2f.slice/crio-bf3473764d2ce7db141edf84bd4ce6fc2b3137458d3ce79f9dcd7f4457e2a38e WatchSource:0}: Error finding container bf3473764d2ce7db141edf84bd4ce6fc2b3137458d3ce79f9dcd7f4457e2a38e: Status 404 returned error can't find the container with id bf3473764d2ce7db141edf84bd4ce6fc2b3137458d3ce79f9dcd7f4457e2a38e Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.802264 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" 
event={"ID":"0cf5962e-a354-421c-b535-0e905c73d5b1","Type":"ContainerStarted","Data":"1bac419325ee990a876766257e9798fca233f7d1136935187c413adabbafbf7d"} Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.814505 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"6955aa1e87f15c2ca02ea3df2ad0b421fe0058fb7b22b706455a7f887a724d99"} Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.819152 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"916cc0d6d70e7cd750739a09f10dfe33a4e1399cf45208168cc621b4e1801afd"} Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.827477 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" event={"ID":"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6","Type":"ContainerStarted","Data":"0c2caf6abc336b18b322ed6df1b8a7863ead2bd60c45aff9f520c34d4d8b569e"} Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.830390 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"9395550d018fa48af2d7aa2dbdb974acfaa3b418c1b7fb496cf03addf2b6a640"} Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.852956 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.852905533 podStartE2EDuration="3.852905533s" podCreationTimestamp="2025-12-12 16:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:23.840530428 +0000 UTC m=+138.304742194" 
watchObservedRunningTime="2025-12-12 16:17:23.852905533 +0000 UTC m=+138.317117289"
Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.915327 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:23 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:23 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:23 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:23 crc kubenswrapper[5116]: I1212 16:17:23.915437 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:24 crc kubenswrapper[5116]: I1212 16:17:24.329318 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46310: no serving certificate available for the kubelet"
Dec 12 16:17:24 crc kubenswrapper[5116]: I1212 16:17:24.424270 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"
Dec 12 16:17:24 crc kubenswrapper[5116]: I1212 16:17:24.840965 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" event={"ID":"eb955636-d9f0-41af-b498-6d380bb8ad2f","Type":"ContainerStarted","Data":"bf3473764d2ce7db141edf84bd4ce6fc2b3137458d3ce79f9dcd7f4457e2a38e"}
Dec 12 16:17:24 crc kubenswrapper[5116]: I1212 16:17:24.912364 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:24 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:24 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:24 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:24 crc kubenswrapper[5116]: I1212 16:17:24.913474 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:25 crc kubenswrapper[5116]: I1212 16:17:25.339767 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:25 crc kubenswrapper[5116]: I1212 16:17:25.484509 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-g5nbl"
Dec 12 16:17:25 crc kubenswrapper[5116]: I1212 16:17:25.911818 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:25 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:25 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:25 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:25 crc kubenswrapper[5116]: I1212 16:17:25.912016 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:26 crc kubenswrapper[5116]: I1212 16:17:26.490741 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fnz8"
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.006420 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:27 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:27 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:27 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.006538 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.006651 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.017322 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-svwnw"
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.078487 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" podStartSLOduration=121.078459153 podStartE2EDuration="2m1.078459153s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:27.038486281 +0000 UTC m=+141.502698037" watchObservedRunningTime="2025-12-12 16:17:27.078459153 +0000 UTC m=+141.542670909"
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.865865 5116 generic.go:358] "Generic (PLEG): container finished" podID="0cf5962e-a354-421c-b535-0e905c73d5b1" containerID="1bac419325ee990a876766257e9798fca233f7d1136935187c413adabbafbf7d" exitCode=0
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.866012 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0cf5962e-a354-421c-b535-0e905c73d5b1","Type":"ContainerDied","Data":"1bac419325ee990a876766257e9798fca233f7d1136935187c413adabbafbf7d"}
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.870045 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"73bf9e5e76525aaca705e69a4a2f7f295bc9470589bb1f9e6c6824d416e40661"}
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.872000 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"3a36c96353e2133a12b9803de7f123c03afb64e903882d2a4cc5b27bf9a1e0c7"}
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.874349 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"95dbb85649fc67497099043a30b19905270384aed3ff9a8f55a8edfaa5761e2c"}
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.874777 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.911889 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:27 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 12 16:17:27 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:27 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:27 crc kubenswrapper[5116]: I1212 16:17:27.912284 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:28 crc kubenswrapper[5116]: I1212 16:17:28.882662 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" event={"ID":"eb955636-d9f0-41af-b498-6d380bb8ad2f","Type":"ContainerStarted","Data":"96e301727d6777c7cfe77a87527e69fb07e21f095c7a73669415e7e289288b04"}
Dec 12 16:17:28 crc kubenswrapper[5116]: I1212 16:17:28.910876 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-sd7g8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:28 crc kubenswrapper[5116]: [+]has-synced ok
Dec 12 16:17:28 crc kubenswrapper[5116]: [+]process-running ok
Dec 12 16:17:28 crc kubenswrapper[5116]: healthz check failed
Dec 12 16:17:28 crc kubenswrapper[5116]: I1212 16:17:28.911009 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8" podUID="3df802e1-3f15-4f5d-ae4e-514d50ff8bde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:29 crc kubenswrapper[5116]: I1212 16:17:29.376471 5116 patch_prober.go:28] interesting pod/console-64d44f6ddf-qhrd4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Dec 12 16:17:29 crc kubenswrapper[5116]: I1212 16:17:29.376887 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-qhrd4" podUID="b68bd1cf-aa0c-43e2-a771-11c6c91d19dc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Dec 12 16:17:29 crc kubenswrapper[5116]: I1212 16:17:29.741837 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
Dec 12 16:17:29 crc kubenswrapper[5116]: I1212 16:17:29.912505 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8"
Dec 12 16:17:29 crc kubenswrapper[5116]: I1212 16:17:29.916472 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-sd7g8"
Dec 12 16:17:32 crc kubenswrapper[5116]: E1212 16:17:32.591619 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:32 crc kubenswrapper[5116]: E1212 16:17:32.593295 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:32 crc kubenswrapper[5116]: E1212 16:17:32.594762 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:32 crc kubenswrapper[5116]: E1212 16:17:32.594842 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 16:17:34 crc kubenswrapper[5116]: I1212 16:17:34.602465 5116 ???:1] "http: TLS handshake error from 192.168.126.11:48428: no serving certificate available for the kubelet"
Dec 12 16:17:37 crc kubenswrapper[5116]: I1212 16:17:37.884904 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:17:39 crc kubenswrapper[5116]: I1212 16:17:39.383481 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:39 crc kubenswrapper[5116]: I1212 16:17:39.388309 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-qhrd4"
Dec 12 16:17:42 crc kubenswrapper[5116]: E1212 16:17:42.591328 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:42 crc kubenswrapper[5116]: E1212 16:17:42.593288 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:42 crc kubenswrapper[5116]: E1212 16:17:42.594673 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:42 crc kubenswrapper[5116]: E1212 16:17:42.594712 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 16:17:46 crc kubenswrapper[5116]: I1212 16:17:46.984040 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-tbppz_3687a9b9-879b-47e3-bc75-6a382ac0febe/kube-multus-additional-cni-plugins/0.log"
Dec 12 16:17:46 crc kubenswrapper[5116]: I1212 16:17:46.984141 5116 generic.go:358] "Generic (PLEG): container finished" podID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" exitCode=137
Dec 12 16:17:46 crc kubenswrapper[5116]: I1212 16:17:46.984267 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" event={"ID":"3687a9b9-879b-47e3-bc75-6a382ac0febe","Type":"ContainerDied","Data":"bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347"}
Dec 12 16:17:48 crc kubenswrapper[5116]: I1212 16:17:48.527827 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9th2f"
Dec 12 16:17:52 crc kubenswrapper[5116]: E1212 16:17:52.589642 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347 is running failed: container process not found" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:52 crc kubenswrapper[5116]: E1212 16:17:52.591333 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347 is running failed: container process not found" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:52 crc kubenswrapper[5116]: E1212 16:17:52.591729 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347 is running failed: container process not found" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:52 crc kubenswrapper[5116]: E1212 16:17:52.591758 5116 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 16:17:55 crc kubenswrapper[5116]: I1212 16:17:55.108487 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35218: no serving certificate available for the kubelet"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.068341 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.071939 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f22a301e-a8e4-44e7-825d-63df4c3cc031" containerName="pruner"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.071981 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22a301e-a8e4-44e7-825d-63df4c3cc031" containerName="pruner"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.072158 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="f22a301e-a8e4-44e7-825d-63df4c3cc031" containerName="pruner"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.337401 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.337665 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.427854 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.428443 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.448388 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.529326 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf5962e-a354-421c-b535-0e905c73d5b1-kubelet-dir\") pod \"0cf5962e-a354-421c-b535-0e905c73d5b1\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") "
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.529531 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf5962e-a354-421c-b535-0e905c73d5b1-kube-api-access\") pod \"0cf5962e-a354-421c-b535-0e905c73d5b1\" (UID: \"0cf5962e-a354-421c-b535-0e905c73d5b1\") "
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.529807 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.529777 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cf5962e-a354-421c-b535-0e905c73d5b1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0cf5962e-a354-421c-b535-0e905c73d5b1" (UID: "0cf5962e-a354-421c-b535-0e905c73d5b1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.529915 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.530132 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.530384 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf5962e-a354-421c-b535-0e905c73d5b1-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.542445 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf5962e-a354-421c-b535-0e905c73d5b1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0cf5962e-a354-421c-b535-0e905c73d5b1" (UID: "0cf5962e-a354-421c-b535-0e905c73d5b1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.554242 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.609314 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-tbppz_3687a9b9-879b-47e3-bc75-6a382ac0febe/kube-multus-additional-cni-plugins/0.log"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.609425 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.631174 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf5962e-a354-421c-b535-0e905c73d5b1-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.678949 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.734089 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3687a9b9-879b-47e3-bc75-6a382ac0febe-tuning-conf-dir\") pod \"3687a9b9-879b-47e3-bc75-6a382ac0febe\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") "
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.734183 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3687a9b9-879b-47e3-bc75-6a382ac0febe-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "3687a9b9-879b-47e3-bc75-6a382ac0febe" (UID: "3687a9b9-879b-47e3-bc75-6a382ac0febe"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.734417 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3687a9b9-879b-47e3-bc75-6a382ac0febe-ready\") pod \"3687a9b9-879b-47e3-bc75-6a382ac0febe\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") "
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.734458 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r6cq\" (UniqueName: \"kubernetes.io/projected/3687a9b9-879b-47e3-bc75-6a382ac0febe-kube-api-access-5r6cq\") pod \"3687a9b9-879b-47e3-bc75-6a382ac0febe\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") "
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.734506 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3687a9b9-879b-47e3-bc75-6a382ac0febe-cni-sysctl-allowlist\") pod \"3687a9b9-879b-47e3-bc75-6a382ac0febe\" (UID: \"3687a9b9-879b-47e3-bc75-6a382ac0febe\") "
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.734791 5116 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3687a9b9-879b-47e3-bc75-6a382ac0febe-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.735074 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3687a9b9-879b-47e3-bc75-6a382ac0febe-ready" (OuterVolumeSpecName: "ready") pod "3687a9b9-879b-47e3-bc75-6a382ac0febe" (UID: "3687a9b9-879b-47e3-bc75-6a382ac0febe"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.735777 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3687a9b9-879b-47e3-bc75-6a382ac0febe-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "3687a9b9-879b-47e3-bc75-6a382ac0febe" (UID: "3687a9b9-879b-47e3-bc75-6a382ac0febe"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.742870 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3687a9b9-879b-47e3-bc75-6a382ac0febe-kube-api-access-5r6cq" (OuterVolumeSpecName: "kube-api-access-5r6cq") pod "3687a9b9-879b-47e3-bc75-6a382ac0febe" (UID: "3687a9b9-879b-47e3-bc75-6a382ac0febe"). InnerVolumeSpecName "kube-api-access-5r6cq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.835922 5116 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3687a9b9-879b-47e3-bc75-6a382ac0febe-ready\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.836287 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5r6cq\" (UniqueName: \"kubernetes.io/projected/3687a9b9-879b-47e3-bc75-6a382ac0febe-kube-api-access-5r6cq\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:56 crc kubenswrapper[5116]: I1212 16:17:56.836303 5116 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3687a9b9-879b-47e3-bc75-6a382ac0febe-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.088564 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qxjm" event={"ID":"3f797462-5a8d-4865-ac29-f49ef38173d2","Type":"ContainerStarted","Data":"30f32c8722a533426995f2927259e8ff56e3244e5c120a13657d13845c56087b"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.093925 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gbh7p" event={"ID":"eb955636-d9f0-41af-b498-6d380bb8ad2f","Type":"ContainerStarted","Data":"48fce52e0cda9af1651275454b152991a2edf7bef14de5225698d84e3ffd48df"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.098987 5116 generic.go:358] "Generic (PLEG): container finished" podID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerID="0ed3a8424592f1e51484b9cc5a3a649e02ad5f307242f6f96875a3738e89ce62" exitCode=0
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.099086 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh7sg" event={"ID":"391fbeb8-9f81-40ca-b1f9-5bb977066fa7","Type":"ContainerDied","Data":"0ed3a8424592f1e51484b9cc5a3a649e02ad5f307242f6f96875a3738e89ce62"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.102029 5116 generic.go:358] "Generic (PLEG): container finished" podID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerID="9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c" exitCode=0
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.102093 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qt2j" event={"ID":"f85c27f2-e8ee-400f-8f2a-5e389b670e09","Type":"ContainerDied","Data":"9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.105060 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-tbppz_3687a9b9-879b-47e3-bc75-6a382ac0febe/kube-multus-additional-cni-plugins/0.log"
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.105256 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz" event={"ID":"3687a9b9-879b-47e3-bc75-6a382ac0febe","Type":"ContainerDied","Data":"3681baf2276e85cb25a037562978d2f5efdde295970d2d50c559d17be5687927"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.105310 5116 scope.go:117] "RemoveContainer" containerID="bce287b7a5531cd2d1b28601dc6febaa988a4184d1af9855eddb2d999def6347"
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.105497 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tbppz"
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.130101 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.130490 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0cf5962e-a354-421c-b535-0e905c73d5b1","Type":"ContainerDied","Data":"32f2c6743030a7143fe88237b605db2457ffd1494eb0c59b29385afc1b910e6f"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.130537 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f2c6743030a7143fe88237b605db2457ffd1494eb0c59b29385afc1b910e6f"
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.142272 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mksww" event={"ID":"8d9629b0-298f-4c07-a908-e83a59c4c402","Type":"ContainerStarted","Data":"503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.145899 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmzmp" event={"ID":"01d69feb-2b7f-4fa0-9d55-d8d13736324d","Type":"ContainerStarted","Data":"466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168"}
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.161703 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gbh7p" podStartSLOduration=151.161684023 podStartE2EDuration="2m31.161684023s" podCreationTimestamp="2025-12-12 16:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:57.154636032 +0000 UTC m=+171.618847798" watchObservedRunningTime="2025-12-12 16:17:57.161684023 +0000 UTC m=+171.625895779"
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.244291 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.303766 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tbppz"]
Dec 12 16:17:57 crc kubenswrapper[5116]: I1212 16:17:57.306883 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tbppz"]
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.086269 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" path="/var/lib/kubelet/pods/3687a9b9-879b-47e3-bc75-6a382ac0febe/volumes"
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.170332 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de1419aa-94f4-4b0d-9c00-a97de0d8f068","Type":"ContainerStarted","Data":"45ab036a2d302a014891d332f853a800f97e9ab5130fcd29bd5edb71ecfc80e3"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.170398 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de1419aa-94f4-4b0d-9c00-a97de0d8f068","Type":"ContainerStarted","Data":"96bc753290c5cb20f96ca6f629534c81ec184468fcbfc382d7dad136debacb33"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.173376 5116 generic.go:358] "Generic (PLEG): container finished" podID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerID="5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e" exitCode=0
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.173465 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt54g" event={"ID":"c33c5b2d-507a-41c8-884d-e5ec63c2894c","Type":"ContainerDied","Data":"5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.175208 5116 generic.go:358] "Generic (PLEG): container finished" podID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerID="503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6" exitCode=0
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.175284 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mksww" event={"ID":"8d9629b0-298f-4c07-a908-e83a59c4c402","Type":"ContainerDied","Data":"503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.180514 5116 generic.go:358] "Generic (PLEG): container finished" podID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerID="7c73dccad82ac7bc6cf7a4e63da5d0988b5ff7b3988d24f325e677b6c9c8e2e3" exitCode=0
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.180625 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4rg" event={"ID":"fc7f231b-6d94-4157-8f03-efca4baf4da2","Type":"ContainerDied","Data":"7c73dccad82ac7bc6cf7a4e63da5d0988b5ff7b3988d24f325e677b6c9c8e2e3"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.183310 5116 generic.go:358] "Generic (PLEG): container finished" podID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerID="30f32c8722a533426995f2927259e8ff56e3244e5c120a13657d13845c56087b" exitCode=0
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.183398 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qxjm" event={"ID":"3f797462-5a8d-4865-ac29-f49ef38173d2","Type":"ContainerDied","Data":"30f32c8722a533426995f2927259e8ff56e3244e5c120a13657d13845c56087b"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.186007 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnzrx" event={"ID":"aed82316-dc90-4d53-bffe-b135a7ebf47d","Type":"ContainerStarted","Data":"d37933b1e55e27c3b856d3149346a44345d597dc3b2f31b5a9382dacc6d1593c"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.212622 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh7sg" event={"ID":"391fbeb8-9f81-40ca-b1f9-5bb977066fa7","Type":"ContainerStarted","Data":"acbbf132b10fd6177304754c4ca9dfb250a51d0d1770e488222e02c711d3150e"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.233916 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.233889465 podStartE2EDuration="2.233889465s" podCreationTimestamp="2025-12-12 16:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:58.20858118 +0000 UTC m=+172.672792936" watchObservedRunningTime="2025-12-12 16:17:58.233889465 +0000 UTC m=+172.698101221"
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.234248 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qt2j" event={"ID":"f85c27f2-e8ee-400f-8f2a-5e389b670e09","Type":"ContainerStarted","Data":"ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f"}
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.393676 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2qt2j" podStartSLOduration=4.406032237 podStartE2EDuration="40.3936516s" podCreationTimestamp="2025-12-12 16:17:18 +0000 UTC" firstStartedPulling="2025-12-12 16:17:20.648639448 +0000 UTC m=+135.112851194" lastFinishedPulling="2025-12-12 16:17:56.636258801 +0000 UTC m=+171.100470557" observedRunningTime="2025-12-12 16:17:58.391064469 +0000 UTC m=+172.855276235" watchObservedRunningTime="2025-12-12 16:17:58.3936516 +0000 UTC m=+172.857863356"
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.415747 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wh7sg" podStartSLOduration=4.416711446 podStartE2EDuration="40.415728877s" podCreationTimestamp="2025-12-12 16:17:18 +0000 UTC" firstStartedPulling="2025-12-12 16:17:20.637058155 +0000 UTC m=+135.101269911" lastFinishedPulling="2025-12-12 16:17:56.636075596 +0000 UTC m=+171.100287342" observedRunningTime="2025-12-12 16:17:58.414367831 +0000 UTC m=+172.878579587" watchObservedRunningTime="2025-12-12 16:17:58.415728877 +0000 UTC m=+172.879940623"
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.874639 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2qt2j"
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.875039 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-2qt2j"
Dec 12 16:17:58 crc kubenswrapper[5116]: I1212 16:17:58.889222 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.172828 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wh7sg"
Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.172898 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-wh7sg"
Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.246453 5116 generic.go:358] "Generic (PLEG): container finished" podID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerID="d37933b1e55e27c3b856d3149346a44345d597dc3b2f31b5a9382dacc6d1593c" exitCode=0
Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.246502 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnzrx" event={"ID":"aed82316-dc90-4d53-bffe-b135a7ebf47d","Type":"ContainerDied","Data":"d37933b1e55e27c3b856d3149346a44345d597dc3b2f31b5a9382dacc6d1593c"}
Dec 12
16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.249206 5116 generic.go:358] "Generic (PLEG): container finished" podID="de1419aa-94f4-4b0d-9c00-a97de0d8f068" containerID="45ab036a2d302a014891d332f853a800f97e9ab5130fcd29bd5edb71ecfc80e3" exitCode=0 Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.249447 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de1419aa-94f4-4b0d-9c00-a97de0d8f068","Type":"ContainerDied","Data":"45ab036a2d302a014891d332f853a800f97e9ab5130fcd29bd5edb71ecfc80e3"} Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.255371 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt54g" event={"ID":"c33c5b2d-507a-41c8-884d-e5ec63c2894c","Type":"ContainerStarted","Data":"13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e"} Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.259061 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mksww" event={"ID":"8d9629b0-298f-4c07-a908-e83a59c4c402","Type":"ContainerStarted","Data":"656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23"} Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.260868 5116 generic.go:358] "Generic (PLEG): container finished" podID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerID="466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168" exitCode=0 Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.260934 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmzmp" event={"ID":"01d69feb-2b7f-4fa0-9d55-d8d13736324d","Type":"ContainerDied","Data":"466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168"} Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.268390 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4rg" 
event={"ID":"fc7f231b-6d94-4157-8f03-efca4baf4da2","Type":"ContainerStarted","Data":"99a510b3ecda6b315e777cddad32848b073feddc64575974aaab8bba461086f4"} Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.274190 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qxjm" event={"ID":"3f797462-5a8d-4865-ac29-f49ef38173d2","Type":"ContainerStarted","Data":"994b482ce64b4373fb164f011f9819d3da33b86ffa9f3898a5f50f515da3d9ac"} Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.309821 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vz4rg" podStartSLOduration=6.232234643 podStartE2EDuration="43.309798778s" podCreationTimestamp="2025-12-12 16:17:16 +0000 UTC" firstStartedPulling="2025-12-12 16:17:19.560033782 +0000 UTC m=+134.024245538" lastFinishedPulling="2025-12-12 16:17:56.637597907 +0000 UTC m=+171.101809673" observedRunningTime="2025-12-12 16:17:59.309026738 +0000 UTC m=+173.773238504" watchObservedRunningTime="2025-12-12 16:17:59.309798778 +0000 UTC m=+173.774010544" Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.336006 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zt54g" podStartSLOduration=6.231653599 podStartE2EDuration="43.335986958s" podCreationTimestamp="2025-12-12 16:17:16 +0000 UTC" firstStartedPulling="2025-12-12 16:17:19.530911474 +0000 UTC m=+133.995123230" lastFinishedPulling="2025-12-12 16:17:56.635244833 +0000 UTC m=+171.099456589" observedRunningTime="2025-12-12 16:17:59.33461389 +0000 UTC m=+173.798825656" watchObservedRunningTime="2025-12-12 16:17:59.335986958 +0000 UTC m=+173.800198714" Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.380617 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mksww" podStartSLOduration=6.291659853 podStartE2EDuration="43.380595125s" 
podCreationTimestamp="2025-12-12 16:17:16 +0000 UTC" firstStartedPulling="2025-12-12 16:17:19.537580925 +0000 UTC m=+134.001792671" lastFinishedPulling="2025-12-12 16:17:56.626516187 +0000 UTC m=+171.090727943" observedRunningTime="2025-12-12 16:17:59.379041183 +0000 UTC m=+173.843252949" watchObservedRunningTime="2025-12-12 16:17:59.380595125 +0000 UTC m=+173.844806881" Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.407726 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7qxjm" podStartSLOduration=6.316996319 podStartE2EDuration="43.407702099s" podCreationTimestamp="2025-12-12 16:17:16 +0000 UTC" firstStartedPulling="2025-12-12 16:17:19.567159725 +0000 UTC m=+134.031371481" lastFinishedPulling="2025-12-12 16:17:56.657865505 +0000 UTC m=+171.122077261" observedRunningTime="2025-12-12 16:17:59.40550374 +0000 UTC m=+173.869715506" watchObservedRunningTime="2025-12-12 16:17:59.407702099 +0000 UTC m=+173.871913855" Dec 12 16:17:59 crc kubenswrapper[5116]: I1212 16:17:59.977620 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2qt2j" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="registry-server" probeResult="failure" output=< Dec 12 16:17:59 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s Dec 12 16:17:59 crc kubenswrapper[5116]: > Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.223071 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wh7sg" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="registry-server" probeResult="failure" output=< Dec 12 16:18:00 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s Dec 12 16:18:00 crc kubenswrapper[5116]: > Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.287707 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zmzmp" event={"ID":"01d69feb-2b7f-4fa0-9d55-d8d13736324d","Type":"ContainerStarted","Data":"9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b"} Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.291396 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnzrx" event={"ID":"aed82316-dc90-4d53-bffe-b135a7ebf47d","Type":"ContainerStarted","Data":"820534cb22268f9c7c9d1c6ad4c332b9ddec61bdc00b88842ee0486eeb6ad416"} Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.314041 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zmzmp" podStartSLOduration=5.345114666 podStartE2EDuration="41.314020352s" podCreationTimestamp="2025-12-12 16:17:19 +0000 UTC" firstStartedPulling="2025-12-12 16:17:20.666939184 +0000 UTC m=+135.131150940" lastFinishedPulling="2025-12-12 16:17:56.63584486 +0000 UTC m=+171.100056626" observedRunningTime="2025-12-12 16:18:00.308327558 +0000 UTC m=+174.772539314" watchObservedRunningTime="2025-12-12 16:18:00.314020352 +0000 UTC m=+174.778232108" Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.341439 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bnzrx" podStartSLOduration=6.291171655 podStartE2EDuration="41.341414223s" podCreationTimestamp="2025-12-12 16:17:19 +0000 UTC" firstStartedPulling="2025-12-12 16:17:21.697013687 +0000 UTC m=+136.161225443" lastFinishedPulling="2025-12-12 16:17:56.747256255 +0000 UTC m=+171.211468011" observedRunningTime="2025-12-12 16:18:00.337747864 +0000 UTC m=+174.801959620" watchObservedRunningTime="2025-12-12 16:18:00.341414223 +0000 UTC m=+174.805625979" Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.686422 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.813249 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kubelet-dir\") pod \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.813431 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kube-api-access\") pod \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\" (UID: \"de1419aa-94f4-4b0d-9c00-a97de0d8f068\") " Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.813764 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "de1419aa-94f4-4b0d-9c00-a97de0d8f068" (UID: "de1419aa-94f4-4b0d-9c00-a97de0d8f068"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.832334 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "de1419aa-94f4-4b0d-9c00-a97de0d8f068" (UID: "de1419aa-94f4-4b0d-9c00-a97de0d8f068"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.914992 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:00 crc kubenswrapper[5116]: I1212 16:18:00.915054 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1419aa-94f4-4b0d-9c00-a97de0d8f068-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:01 crc kubenswrapper[5116]: I1212 16:18:01.300633 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de1419aa-94f4-4b0d-9c00-a97de0d8f068","Type":"ContainerDied","Data":"96bc753290c5cb20f96ca6f629534c81ec184468fcbfc382d7dad136debacb33"} Dec 12 16:18:01 crc kubenswrapper[5116]: I1212 16:18:01.300702 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96bc753290c5cb20f96ca6f629534c81ec184468fcbfc382d7dad136debacb33" Dec 12 16:18:01 crc kubenswrapper[5116]: I1212 16:18:01.300658 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.062914 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063543 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0cf5962e-a354-421c-b535-0e905c73d5b1" containerName="pruner" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063564 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf5962e-a354-421c-b535-0e905c73d5b1" containerName="pruner" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063590 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063598 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063611 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de1419aa-94f4-4b0d-9c00-a97de0d8f068" containerName="pruner" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063632 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1419aa-94f4-4b0d-9c00-a97de0d8f068" containerName="pruner" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063733 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3687a9b9-879b-47e3-bc75-6a382ac0febe" containerName="kube-multus-additional-cni-plugins" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063744 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="de1419aa-94f4-4b0d-9c00-a97de0d8f068" containerName="pruner" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.063756 5116 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="0cf5962e-a354-421c-b535-0e905c73d5b1" containerName="pruner" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.664321 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.664540 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.667201 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.668422 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.740445 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-var-lock\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.740674 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.740980 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d3c81b-ff79-401d-a4b6-8098265e5534-kube-api-access\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " 
pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.842070 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-var-lock\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.842404 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.842459 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.842219 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-var-lock\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.842646 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d3c81b-ff79-401d-a4b6-8098265e5534-kube-api-access\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.863758 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d3c81b-ff79-401d-a4b6-8098265e5534-kube-api-access\") pod \"installer-12-crc\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:02 crc kubenswrapper[5116]: I1212 16:18:02.984717 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:03 crc kubenswrapper[5116]: I1212 16:18:03.437638 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 16:18:03 crc kubenswrapper[5116]: W1212 16:18:03.445358 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd1d3c81b_ff79_401d_a4b6_8098265e5534.slice/crio-3d83fd67bd852d19a6e519290d9087eab91c0db43bb9d1fb172722e9d894b7eb WatchSource:0}: Error finding container 3d83fd67bd852d19a6e519290d9087eab91c0db43bb9d1fb172722e9d894b7eb: Status 404 returned error can't find the container with id 3d83fd67bd852d19a6e519290d9087eab91c0db43bb9d1fb172722e9d894b7eb Dec 12 16:18:04 crc kubenswrapper[5116]: I1212 16:18:04.317964 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d1d3c81b-ff79-401d-a4b6-8098265e5534","Type":"ContainerStarted","Data":"3d83fd67bd852d19a6e519290d9087eab91c0db43bb9d1fb172722e9d894b7eb"} Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.519862 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.520432 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.577810 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.687947 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mksww" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.688012 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mksww" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.741410 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mksww" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.924656 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7qxjm" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.924723 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7qxjm" Dec 12 16:18:06 crc kubenswrapper[5116]: I1212 16:18:06.968228 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7qxjm" Dec 12 16:18:07 crc kubenswrapper[5116]: I1212 16:18:07.381481 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:18:07 crc kubenswrapper[5116]: I1212 16:18:07.382703 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mksww" Dec 12 16:18:07 crc kubenswrapper[5116]: I1212 16:18:07.405469 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7qxjm" Dec 12 16:18:08 crc kubenswrapper[5116]: I1212 16:18:08.345533 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:18:08 crc 
kubenswrapper[5116]: I1212 16:18:08.345638 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:18:08 crc kubenswrapper[5116]: I1212 16:18:08.386450 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:18:08 crc kubenswrapper[5116]: I1212 16:18:08.912356 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:18:08 crc kubenswrapper[5116]: I1212 16:18:08.949846 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.120150 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qxjm"] Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.213189 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.264801 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wh7sg" Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.352416 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d1d3c81b-ff79-401d-a4b6-8098265e5534","Type":"ContainerStarted","Data":"dc52278440b79d3b1339056b26d18ae8a8f20e98cba4bda5b7abda41fe160df0"} Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.353010 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7qxjm" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="registry-server" containerID="cri-o://994b482ce64b4373fb164f011f9819d3da33b86ffa9f3898a5f50f515da3d9ac" gracePeriod=2 Dec 12 
16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.372064 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=7.372046136 podStartE2EDuration="7.372046136s" podCreationTimestamp="2025-12-12 16:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:18:09.371292585 +0000 UTC m=+183.835504371" watchObservedRunningTime="2025-12-12 16:18:09.372046136 +0000 UTC m=+183.836257892" Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.404530 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vz4rg" Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.919651 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.919904 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:18:09 crc kubenswrapper[5116]: I1212 16:18:09.956914 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:18:10 crc kubenswrapper[5116]: I1212 16:18:10.291350 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:18:10 crc kubenswrapper[5116]: I1212 16:18:10.291505 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:18:10 crc kubenswrapper[5116]: I1212 16:18:10.345010 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bnzrx" Dec 12 16:18:10 crc kubenswrapper[5116]: I1212 16:18:10.401373 5116 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zmzmp"
Dec 12 16:18:10 crc kubenswrapper[5116]: I1212 16:18:10.406880 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bnzrx"
Dec 12 16:18:11 crc kubenswrapper[5116]: I1212 16:18:11.519257 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh7sg"]
Dec 12 16:18:11 crc kubenswrapper[5116]: I1212 16:18:11.519550 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wh7sg" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="registry-server" containerID="cri-o://acbbf132b10fd6177304754c4ca9dfb250a51d0d1770e488222e02c711d3150e" gracePeriod=2
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.378770 5116 generic.go:358] "Generic (PLEG): container finished" podID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerID="994b482ce64b4373fb164f011f9819d3da33b86ffa9f3898a5f50f515da3d9ac" exitCode=0
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.378851 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qxjm" event={"ID":"3f797462-5a8d-4865-ac29-f49ef38173d2","Type":"ContainerDied","Data":"994b482ce64b4373fb164f011f9819d3da33b86ffa9f3898a5f50f515da3d9ac"}
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.381677 5116 generic.go:358] "Generic (PLEG): container finished" podID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerID="acbbf132b10fd6177304754c4ca9dfb250a51d0d1770e488222e02c711d3150e" exitCode=0
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.381718 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh7sg" event={"ID":"391fbeb8-9f81-40ca-b1f9-5bb977066fa7","Type":"ContainerDied","Data":"acbbf132b10fd6177304754c4ca9dfb250a51d0d1770e488222e02c711d3150e"}
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.875322 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.909742 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-catalog-content\") pod \"3f797462-5a8d-4865-ac29-f49ef38173d2\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") "
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.909802 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-utilities\") pod \"3f797462-5a8d-4865-ac29-f49ef38173d2\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") "
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.909839 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wzbj\" (UniqueName: \"kubernetes.io/projected/3f797462-5a8d-4865-ac29-f49ef38173d2-kube-api-access-2wzbj\") pod \"3f797462-5a8d-4865-ac29-f49ef38173d2\" (UID: \"3f797462-5a8d-4865-ac29-f49ef38173d2\") "
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.911206 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-utilities" (OuterVolumeSpecName: "utilities") pod "3f797462-5a8d-4865-ac29-f49ef38173d2" (UID: "3f797462-5a8d-4865-ac29-f49ef38173d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.923731 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vz4rg"]
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.924229 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vz4rg" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="registry-server" containerID="cri-o://99a510b3ecda6b315e777cddad32848b073feddc64575974aaab8bba461086f4" gracePeriod=2
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.929437 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f797462-5a8d-4865-ac29-f49ef38173d2-kube-api-access-2wzbj" (OuterVolumeSpecName: "kube-api-access-2wzbj") pod "3f797462-5a8d-4865-ac29-f49ef38173d2" (UID: "3f797462-5a8d-4865-ac29-f49ef38173d2"). InnerVolumeSpecName "kube-api-access-2wzbj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.954902 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f797462-5a8d-4865-ac29-f49ef38173d2" (UID: "3f797462-5a8d-4865-ac29-f49ef38173d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:13 crc kubenswrapper[5116]: I1212 16:18:13.975973 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh7sg"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.010720 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-catalog-content\") pod \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") "
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.011208 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.011228 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f797462-5a8d-4865-ac29-f49ef38173d2-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.011237 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2wzbj\" (UniqueName: \"kubernetes.io/projected/3f797462-5a8d-4865-ac29-f49ef38173d2-kube-api-access-2wzbj\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.022892 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "391fbeb8-9f81-40ca-b1f9-5bb977066fa7" (UID: "391fbeb8-9f81-40ca-b1f9-5bb977066fa7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.112328 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcnb5\" (UniqueName: \"kubernetes.io/projected/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-kube-api-access-fcnb5\") pod \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") "
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.112418 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-utilities\") pod \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\" (UID: \"391fbeb8-9f81-40ca-b1f9-5bb977066fa7\") "
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.113817 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-utilities" (OuterVolumeSpecName: "utilities") pod "391fbeb8-9f81-40ca-b1f9-5bb977066fa7" (UID: "391fbeb8-9f81-40ca-b1f9-5bb977066fa7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.114209 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.114242 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.118216 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-kube-api-access-fcnb5" (OuterVolumeSpecName: "kube-api-access-fcnb5") pod "391fbeb8-9f81-40ca-b1f9-5bb977066fa7" (UID: "391fbeb8-9f81-40ca-b1f9-5bb977066fa7"). InnerVolumeSpecName "kube-api-access-fcnb5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.124156 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnzrx"]
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.124576 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bnzrx" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="registry-server" containerID="cri-o://820534cb22268f9c7c9d1c6ad4c332b9ddec61bdc00b88842ee0486eeb6ad416" gracePeriod=2
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.215540 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcnb5\" (UniqueName: \"kubernetes.io/projected/391fbeb8-9f81-40ca-b1f9-5bb977066fa7-kube-api-access-fcnb5\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.389990 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qxjm" event={"ID":"3f797462-5a8d-4865-ac29-f49ef38173d2","Type":"ContainerDied","Data":"d88ac35a9d4eb62d7533935d22e8145aedb9a517a24260361616215caff3e655"}
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.390323 5116 scope.go:117] "RemoveContainer" containerID="994b482ce64b4373fb164f011f9819d3da33b86ffa9f3898a5f50f515da3d9ac"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.390698 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qxjm"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.402698 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh7sg"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.403347 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh7sg" event={"ID":"391fbeb8-9f81-40ca-b1f9-5bb977066fa7","Type":"ContainerDied","Data":"8e8d1062638fbc844b83c335679cb633b8a3f091c3bdceca41e7d9a7389ab918"}
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.415178 5116 scope.go:117] "RemoveContainer" containerID="30f32c8722a533426995f2927259e8ff56e3244e5c120a13657d13845c56087b"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.416848 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qxjm"]
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.423438 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7qxjm"]
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.438083 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh7sg"]
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.438386 5116 scope.go:117] "RemoveContainer" containerID="3945fc05432fd9da74da13d37e668b037fdb76e934c58da3e0a8316fc2d8064d"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.441627 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh7sg"]
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.457246 5116 scope.go:117] "RemoveContainer" containerID="acbbf132b10fd6177304754c4ca9dfb250a51d0d1770e488222e02c711d3150e"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.473929 5116 scope.go:117] "RemoveContainer" containerID="0ed3a8424592f1e51484b9cc5a3a649e02ad5f307242f6f96875a3738e89ce62"
Dec 12 16:18:14 crc kubenswrapper[5116]: I1212 16:18:14.493429 5116 scope.go:117] "RemoveContainer" containerID="60aacfb9aae3138ed46a183ad77b3979bb962bfe60c4437b2347caa6ff6fbdf0"
Dec 12 16:18:16 crc kubenswrapper[5116]: I1212 16:18:16.054907 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" path="/var/lib/kubelet/pods/391fbeb8-9f81-40ca-b1f9-5bb977066fa7/volumes"
Dec 12 16:18:16 crc kubenswrapper[5116]: I1212 16:18:16.055667 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" path="/var/lib/kubelet/pods/3f797462-5a8d-4865-ac29-f49ef38173d2/volumes"
Dec 12 16:18:17 crc kubenswrapper[5116]: I1212 16:18:17.425542 5116 generic.go:358] "Generic (PLEG): container finished" podID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerID="99a510b3ecda6b315e777cddad32848b073feddc64575974aaab8bba461086f4" exitCode=0
Dec 12 16:18:17 crc kubenswrapper[5116]: I1212 16:18:17.425600 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4rg" event={"ID":"fc7f231b-6d94-4157-8f03-efca4baf4da2","Type":"ContainerDied","Data":"99a510b3ecda6b315e777cddad32848b073feddc64575974aaab8bba461086f4"}
Dec 12 16:18:18 crc kubenswrapper[5116]: I1212 16:18:18.434940 5116 generic.go:358] "Generic (PLEG): container finished" podID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerID="820534cb22268f9c7c9d1c6ad4c332b9ddec61bdc00b88842ee0486eeb6ad416" exitCode=0
Dec 12 16:18:18 crc kubenswrapper[5116]: I1212 16:18:18.435033 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnzrx" event={"ID":"aed82316-dc90-4d53-bffe-b135a7ebf47d","Type":"ContainerDied","Data":"820534cb22268f9c7c9d1c6ad4c332b9ddec61bdc00b88842ee0486eeb6ad416"}
Dec 12 16:18:18 crc kubenswrapper[5116]: I1212 16:18:18.990967 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz4rg"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.085988 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-catalog-content\") pod \"fc7f231b-6d94-4157-8f03-efca4baf4da2\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") "
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.086146 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-utilities\") pod \"fc7f231b-6d94-4157-8f03-efca4baf4da2\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") "
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.086172 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-822mk\" (UniqueName: \"kubernetes.io/projected/fc7f231b-6d94-4157-8f03-efca4baf4da2-kube-api-access-822mk\") pod \"fc7f231b-6d94-4157-8f03-efca4baf4da2\" (UID: \"fc7f231b-6d94-4157-8f03-efca4baf4da2\") "
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.088009 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-utilities" (OuterVolumeSpecName: "utilities") pod "fc7f231b-6d94-4157-8f03-efca4baf4da2" (UID: "fc7f231b-6d94-4157-8f03-efca4baf4da2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.093523 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7f231b-6d94-4157-8f03-efca4baf4da2-kube-api-access-822mk" (OuterVolumeSpecName: "kube-api-access-822mk") pod "fc7f231b-6d94-4157-8f03-efca4baf4da2" (UID: "fc7f231b-6d94-4157-8f03-efca4baf4da2"). InnerVolumeSpecName "kube-api-access-822mk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.142177 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc7f231b-6d94-4157-8f03-efca4baf4da2" (UID: "fc7f231b-6d94-4157-8f03-efca4baf4da2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.187518 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.187559 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc7f231b-6d94-4157-8f03-efca4baf4da2-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.187574 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-822mk\" (UniqueName: \"kubernetes.io/projected/fc7f231b-6d94-4157-8f03-efca4baf4da2-kube-api-access-822mk\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.339504 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnzrx"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.389693 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-utilities\") pod \"aed82316-dc90-4d53-bffe-b135a7ebf47d\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") "
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.389769 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxglf\" (UniqueName: \"kubernetes.io/projected/aed82316-dc90-4d53-bffe-b135a7ebf47d-kube-api-access-sxglf\") pod \"aed82316-dc90-4d53-bffe-b135a7ebf47d\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") "
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.389886 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-catalog-content\") pod \"aed82316-dc90-4d53-bffe-b135a7ebf47d\" (UID: \"aed82316-dc90-4d53-bffe-b135a7ebf47d\") "
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.390700 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-utilities" (OuterVolumeSpecName: "utilities") pod "aed82316-dc90-4d53-bffe-b135a7ebf47d" (UID: "aed82316-dc90-4d53-bffe-b135a7ebf47d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.394207 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed82316-dc90-4d53-bffe-b135a7ebf47d-kube-api-access-sxglf" (OuterVolumeSpecName: "kube-api-access-sxglf") pod "aed82316-dc90-4d53-bffe-b135a7ebf47d" (UID: "aed82316-dc90-4d53-bffe-b135a7ebf47d"). InnerVolumeSpecName "kube-api-access-sxglf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.442842 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz4rg"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.442803 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4rg" event={"ID":"fc7f231b-6d94-4157-8f03-efca4baf4da2","Type":"ContainerDied","Data":"cd5f4821ccf74098c162143faf75954decdd6ae9f9bb469158b98fdd2fb6f610"}
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.443378 5116 scope.go:117] "RemoveContainer" containerID="99a510b3ecda6b315e777cddad32848b073feddc64575974aaab8bba461086f4"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.447911 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnzrx" event={"ID":"aed82316-dc90-4d53-bffe-b135a7ebf47d","Type":"ContainerDied","Data":"e85db79263519bb23075d0f1302325ad1beec8d93061c85834ae4c13d523f4cc"}
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.448009 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnzrx"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.468662 5116 scope.go:117] "RemoveContainer" containerID="7c73dccad82ac7bc6cf7a4e63da5d0988b5ff7b3988d24f325e677b6c9c8e2e3"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.472311 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vz4rg"]
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.475689 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vz4rg"]
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.492159 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.492236 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxglf\" (UniqueName: \"kubernetes.io/projected/aed82316-dc90-4d53-bffe-b135a7ebf47d-kube-api-access-sxglf\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.496831 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aed82316-dc90-4d53-bffe-b135a7ebf47d" (UID: "aed82316-dc90-4d53-bffe-b135a7ebf47d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.501718 5116 scope.go:117] "RemoveContainer" containerID="d59b96084d881256cf230fb4c499ba0c551e68eac2147e9f64d58d73d6f3c162"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.516428 5116 scope.go:117] "RemoveContainer" containerID="820534cb22268f9c7c9d1c6ad4c332b9ddec61bdc00b88842ee0486eeb6ad416"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.537349 5116 scope.go:117] "RemoveContainer" containerID="d37933b1e55e27c3b856d3149346a44345d597dc3b2f31b5a9382dacc6d1593c"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.553766 5116 scope.go:117] "RemoveContainer" containerID="b3bbd5bdba90de48480f797099c9fcf8549d5ce6f53893d856ceaec95039dce0"
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.593208 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed82316-dc90-4d53-bffe-b135a7ebf47d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.782930 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnzrx"]
Dec 12 16:18:19 crc kubenswrapper[5116]: I1212 16:18:19.785265 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bnzrx"]
Dec 12 16:18:20 crc kubenswrapper[5116]: I1212 16:18:20.053246 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" path="/var/lib/kubelet/pods/aed82316-dc90-4d53-bffe-b135a7ebf47d/volumes"
Dec 12 16:18:20 crc kubenswrapper[5116]: I1212 16:18:20.054038 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" path="/var/lib/kubelet/pods/fc7f231b-6d94-4157-8f03-efca4baf4da2/volumes"
Dec 12 16:18:30 crc kubenswrapper[5116]: I1212 16:18:30.759068 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lw784"]
Dec 12 16:18:36 crc kubenswrapper[5116]: I1212 16:18:36.101205 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59528: no serving certificate available for the kubelet"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.363612 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365158 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365187 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365210 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365222 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365235 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365247 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365272 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365284 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365313 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365325 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365345 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365357 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365376 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365390 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365409 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365420 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="extract-utilities"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365436 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365451 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="extract-content"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365465 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365477 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365491 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365502 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365524 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365535 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365717 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f797462-5a8d-4865-ac29-f49ef38173d2" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365737 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="391fbeb8-9f81-40ca-b1f9-5bb977066fa7" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365759 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7f231b-6d94-4157-8f03-efca4baf4da2" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.365780 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="aed82316-dc90-4d53-bffe-b135a7ebf47d" containerName="registry-server"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.422165 5116 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.422281 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.422421 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.423480 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea" gracePeriod=15
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.423785 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294" gracePeriod=15
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.423891 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919" gracePeriod=15
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.423975 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4" gracePeriod=15
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.424097 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9" gracePeriod=15
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.425856 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.425904 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.425934 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.425949 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.425973 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.425988 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.426025 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.426043 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.428753 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432599 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432653 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432698 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432715 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432766 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432782 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432810 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.432825 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433229 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433267 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433292 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433313 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433333 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433357 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433561 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433824 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944"
containerName="kube-apiserver-check-endpoints" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.433846 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.434335 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.509329 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: E1212 16:18:46.510315 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.538514 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.538906 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.539263 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.539438 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.539524 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641473 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641543 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641572 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641619 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641644 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641644 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641738 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641753 5116 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641840 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641969 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.641993 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.642063 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.642080 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.642187 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.642247 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.647255 5116 generic.go:358] "Generic (PLEG): container finished" podID="d1d3c81b-ff79-401d-a4b6-8098265e5534" containerID="dc52278440b79d3b1339056b26d18ae8a8f20e98cba4bda5b7abda41fe160df0" exitCode=0 Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.647360 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d1d3c81b-ff79-401d-a4b6-8098265e5534","Type":"ContainerDied","Data":"dc52278440b79d3b1339056b26d18ae8a8f20e98cba4bda5b7abda41fe160df0"} Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.649444 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:46 crc 
kubenswrapper[5116]: I1212 16:18:46.649789 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.650842 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.651717 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294" exitCode=0 Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.651783 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919" exitCode=0 Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.651795 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4" exitCode=0 Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.651804 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9" exitCode=2 Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.651883 5116 scope.go:117] "RemoveContainer" containerID="9bd47ba55a3a5a9f48745ae0ebf7b0ab9d81fced62cf2690c52cfb6e41940a78" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743300 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743439 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743484 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743541 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743568 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743641 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743680 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.743715 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.744018 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.744082 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: I1212 16:18:46.812030 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:46 crc kubenswrapper[5116]: E1212 16:18:46.841295 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.248:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880841ffc43e453 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:18:46.840394835 +0000 UTC m=+221.304606591,LastTimestamp:2025-12-12 16:18:46.840394835 +0000 UTC m=+221.304606591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.659366 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b"} Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.659784 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"88d33ae781b91ec7758d945a6050dff244e6e5623ed77b1eac672b80cff71024"} Dec 12 16:18:47 crc 
kubenswrapper[5116]: I1212 16:18:47.660183 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:47 crc kubenswrapper[5116]: E1212 16:18:47.660852 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.661575 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.664319 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.938247 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.938959 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.961564 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d3c81b-ff79-401d-a4b6-8098265e5534-kube-api-access\") pod \"d1d3c81b-ff79-401d-a4b6-8098265e5534\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.961689 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-var-lock\") pod \"d1d3c81b-ff79-401d-a4b6-8098265e5534\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.961802 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-kubelet-dir\") pod \"d1d3c81b-ff79-401d-a4b6-8098265e5534\" (UID: \"d1d3c81b-ff79-401d-a4b6-8098265e5534\") " Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.961969 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-var-lock" (OuterVolumeSpecName: "var-lock") pod "d1d3c81b-ff79-401d-a4b6-8098265e5534" (UID: "d1d3c81b-ff79-401d-a4b6-8098265e5534"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.962078 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d1d3c81b-ff79-401d-a4b6-8098265e5534" (UID: "d1d3c81b-ff79-401d-a4b6-8098265e5534"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.962530 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.962566 5116 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d3c81b-ff79-401d-a4b6-8098265e5534-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:47 crc kubenswrapper[5116]: I1212 16:18:47.969306 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d3c81b-ff79-401d-a4b6-8098265e5534-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d1d3c81b-ff79-401d-a4b6-8098265e5534" (UID: "d1d3c81b-ff79-401d-a4b6-8098265e5534"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.064645 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d3c81b-ff79-401d-a4b6-8098265e5534-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.699369 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d1d3c81b-ff79-401d-a4b6-8098265e5534","Type":"ContainerDied","Data":"3d83fd67bd852d19a6e519290d9087eab91c0db43bb9d1fb172722e9d894b7eb"} Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.699880 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d83fd67bd852d19a6e519290d9087eab91c0db43bb9d1fb172722e9d894b7eb" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.699479 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.705038 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.855407 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.856679 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.857506 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.858272 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880157 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880441 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880490 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880566 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880452 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880603 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880670 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.880665 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.881368 5116 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.881393 5116 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.881404 5116 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.881713 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.883448 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.982697 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:48 crc kubenswrapper[5116]: I1212 16:18:48.983226 5116 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:49 crc kubenswrapper[5116]: E1212 16:18:49.227422 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.248:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880841ffc43e453 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:18:46.840394835 +0000 UTC m=+221.304606591,LastTimestamp:2025-12-12 16:18:46.840394835 +0000 UTC m=+221.304606591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.415960 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.416082 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.716521 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.717486 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea" exitCode=0 Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.717580 5116 scope.go:117] "RemoveContainer" containerID="b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.717695 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.738343 5116 scope.go:117] "RemoveContainer" containerID="eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.755821 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.756402 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.766275 5116 scope.go:117] "RemoveContainer" containerID="068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.785842 5116 scope.go:117] "RemoveContainer" containerID="c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.801785 5116 scope.go:117] "RemoveContainer" containerID="3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.822586 5116 scope.go:117] "RemoveContainer" containerID="66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.883091 5116 scope.go:117] "RemoveContainer" containerID="b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294" Dec 12 16:18:49 crc kubenswrapper[5116]: E1212 16:18:49.883736 5116 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\": container with ID starting with b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294 not found: ID does not exist" containerID="b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.883854 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294"} err="failed to get container status \"b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\": rpc error: code = NotFound desc = could not find container \"b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294\": container with ID starting with b3ec8eefae18a3dcee9b9b199e384dd28947fb2f9730a313468bf5eac4472294 not found: ID does not exist" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.883929 5116 scope.go:117] "RemoveContainer" containerID="eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919" Dec 12 16:18:49 crc kubenswrapper[5116]: E1212 16:18:49.884246 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\": container with ID starting with eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919 not found: ID does not exist" containerID="eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.884273 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919"} err="failed to get container status \"eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\": rpc error: code = NotFound desc = could 
not find container \"eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919\": container with ID starting with eda2c029c3849c00040b20c9dee2d67c914e521ffd6c8c0966bb7668e954a919 not found: ID does not exist" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.884290 5116 scope.go:117] "RemoveContainer" containerID="068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4" Dec 12 16:18:49 crc kubenswrapper[5116]: E1212 16:18:49.884537 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\": container with ID starting with 068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4 not found: ID does not exist" containerID="068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.884558 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4"} err="failed to get container status \"068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\": rpc error: code = NotFound desc = could not find container \"068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4\": container with ID starting with 068a652527ebf4ce029e02a0dc5c44656dbbabf347de3654f7208c747b3883b4 not found: ID does not exist" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.884576 5116 scope.go:117] "RemoveContainer" containerID="c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9" Dec 12 16:18:49 crc kubenswrapper[5116]: E1212 16:18:49.884967 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\": container with ID starting with c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9 not found: 
ID does not exist" containerID="c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.884990 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9"} err="failed to get container status \"c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\": rpc error: code = NotFound desc = could not find container \"c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9\": container with ID starting with c642f324bea0af4215f7880b5858ad82f6cb3fb53646709c8a2a100a864958e9 not found: ID does not exist" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.885004 5116 scope.go:117] "RemoveContainer" containerID="3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea" Dec 12 16:18:49 crc kubenswrapper[5116]: E1212 16:18:49.885435 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\": container with ID starting with 3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea not found: ID does not exist" containerID="3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.885456 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea"} err="failed to get container status \"3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\": rpc error: code = NotFound desc = could not find container \"3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea\": container with ID starting with 3a7ef5b97d595aee19bffc78c0b5c37a650b579afa23c0c6f2c3ff36994d57ea not found: ID does not exist" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.885470 5116 
scope.go:117] "RemoveContainer" containerID="66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949" Dec 12 16:18:49 crc kubenswrapper[5116]: E1212 16:18:49.885654 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\": container with ID starting with 66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949 not found: ID does not exist" containerID="66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949" Dec 12 16:18:49 crc kubenswrapper[5116]: I1212 16:18:49.885702 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949"} err="failed to get container status \"66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\": rpc error: code = NotFound desc = could not find container \"66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949\": container with ID starting with 66d076e70740e1ba20bdace97ca4d572827ed80a0049d1b7829adf23767d6949 not found: ID does not exist" Dec 12 16:18:50 crc kubenswrapper[5116]: I1212 16:18:50.062452 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 12 16:18:50 crc kubenswrapper[5116]: E1212 16:18:50.089013 5116 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" volumeName="registry-storage" Dec 12 16:18:50 crc kubenswrapper[5116]: E1212 
16:18:50.895961 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:50 crc kubenswrapper[5116]: E1212 16:18:50.896935 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:50 crc kubenswrapper[5116]: E1212 16:18:50.897685 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:50 crc kubenswrapper[5116]: E1212 16:18:50.898634 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:50 crc kubenswrapper[5116]: E1212 16:18:50.899136 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:50 crc kubenswrapper[5116]: I1212 16:18:50.899189 5116 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 12 16:18:50 crc kubenswrapper[5116]: E1212 16:18:50.899603 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="200ms" Dec 12 16:18:51 crc kubenswrapper[5116]: 
E1212 16:18:51.100793 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="400ms" Dec 12 16:18:51 crc kubenswrapper[5116]: E1212 16:18:51.502641 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="800ms" Dec 12 16:18:52 crc kubenswrapper[5116]: E1212 16:18:52.303826 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="1.6s" Dec 12 16:18:53 crc kubenswrapper[5116]: E1212 16:18:53.905260 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="3.2s" Dec 12 16:18:55 crc kubenswrapper[5116]: I1212 16:18:55.799564 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" containerName="oauth-openshift" containerID="cri-o://46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f" gracePeriod=15 Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.050254 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 
38.102.83.248:6443: connect: connection refused" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.326749 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.327540 5116 status_manager.go:895] "Failed to get status for pod" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lw784\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.328126 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.662471 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-router-certs\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.662587 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-trusted-ca-bundle\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.662635 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-session\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.662751 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gwc5\" (UniqueName: \"kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.662813 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-provider-selection\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.662866 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-ocp-branding-template\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.662948 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-idp-0-file-data\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.663031 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-policies\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.663080 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-service-ca\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.663273 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-error\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.663343 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-login\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.663460 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-cliconfig\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.663513 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-serving-cert\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.663575 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-dir\") pod \"41bfba7f-9125-4770-99ea-3b72ddc0173b\" (UID: \"41bfba7f-9125-4770-99ea-3b72ddc0173b\") " Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.664038 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.665331 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.664332 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.665823 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.666239 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.678231 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.680553 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.682522 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.683842 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.686121 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.692480 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.694719 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.696895 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.711134 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5" (OuterVolumeSpecName: "kube-api-access-8gwc5") pod "41bfba7f-9125-4770-99ea-3b72ddc0173b" (UID: "41bfba7f-9125-4770-99ea-3b72ddc0173b"). InnerVolumeSpecName "kube-api-access-8gwc5". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765009 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765061 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765077 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765091 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765132 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765151 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765169 5116 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41bfba7f-9125-4770-99ea-3b72ddc0173b-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765186 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765203 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765224 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765241 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gwc5\" (UniqueName: \"kubernetes.io/projected/41bfba7f-9125-4770-99ea-3b72ddc0173b-kube-api-access-8gwc5\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765257 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765272 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.765285 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41bfba7f-9125-4770-99ea-3b72ddc0173b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.779612 5116 generic.go:358] "Generic (PLEG): container finished" podID="41bfba7f-9125-4770-99ea-3b72ddc0173b" containerID="46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f" exitCode=0
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.779884 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" event={"ID":"41bfba7f-9125-4770-99ea-3b72ddc0173b","Type":"ContainerDied","Data":"46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f"}
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.779934 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" event={"ID":"41bfba7f-9125-4770-99ea-3b72ddc0173b","Type":"ContainerDied","Data":"6db635fef50e171c20119e96417f3465b6df6d09f87f1ff5b6f6eceea4e7e10d"}
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.779966 5116 scope.go:117] "RemoveContainer" containerID="46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f"
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.780186 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lw784"
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.782520 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.783073 5116 status_manager.go:895] "Failed to get status for pod" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lw784\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.801722 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.802312 5116 status_manager.go:895] "Failed to get status for pod" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lw784\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.808365 5116 scope.go:117] "RemoveContainer" containerID="46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f"
Dec 12 16:18:56 crc kubenswrapper[5116]: E1212 16:18:56.808942 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f\": container with ID starting with 46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f not found: ID does not exist" containerID="46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f"
Dec 12 16:18:56 crc kubenswrapper[5116]: I1212 16:18:56.808982 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f"} err="failed to get container status \"46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f\": rpc error: code = NotFound desc = could not find container \"46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f\": container with ID starting with 46be383bef764a2e13481f07dc118dad3c64010485bc1ad83f7ca4d8b62fe11f not found: ID does not exist"
Dec 12 16:18:57 crc kubenswrapper[5116]: E1212 16:18:57.107209 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="6.4s"
Dec 12 16:18:58 crc kubenswrapper[5116]: I1212 16:18:58.067882 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:58 crc kubenswrapper[5116]: I1212 16:18:58.069630 5116 status_manager.go:895] "Failed to get status for pod" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lw784\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:58 crc kubenswrapper[5116]: I1212 16:18:58.069853 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:58 crc kubenswrapper[5116]: I1212 16:18:58.084269 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:18:58 crc kubenswrapper[5116]: I1212 16:18:58.084296 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:18:58 crc kubenswrapper[5116]: E1212 16:18:58.084726 5116 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:58 crc kubenswrapper[5116]: I1212 16:18:58.084859 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:58 crc kubenswrapper[5116]: I1212 16:18:58.805873 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"99ffe3c32314740f9270b080d4b7aebd1b8de255e7695a831aa6a30e1d22c2b3"}
Dec 12 16:18:59 crc kubenswrapper[5116]: E1212 16:18:59.229446 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.248:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880841ffc43e453 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:18:46.840394835 +0000 UTC m=+221.304606591,LastTimestamp:2025-12-12 16:18:46.840394835 +0000 UTC m=+221.304606591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:18:59 crc kubenswrapper[5116]: I1212 16:18:59.815660 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:18:59 crc kubenswrapper[5116]: I1212 16:18:59.815709 5116 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9" exitCode=1
Dec 12 16:18:59 crc kubenswrapper[5116]: I1212 16:18:59.815804 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9"}
Dec 12 16:18:59 crc kubenswrapper[5116]: I1212 16:18:59.816403 5116 scope.go:117] "RemoveContainer" containerID="67fc488617b9c30b954fd7a3c9d1afc2abf6b0309339f0b3270292f884509cc9"
Dec 12 16:18:59 crc kubenswrapper[5116]: I1212 16:18:59.817221 5116 status_manager.go:895] "Failed to get status for pod" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lw784\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:59 crc kubenswrapper[5116]: I1212 16:18:59.817718 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:18:59 crc kubenswrapper[5116]: I1212 16:18:59.818202 5116 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.833540 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.834287 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"130673c95da18397c65b152a95f800b1263a9dabe114fdaf2ee60e21fee1e163"}
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.835910 5116 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.836720 5116 status_manager.go:895] "Failed to get status for pod" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lw784\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.837287 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.839356 5116 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="a327a7a0fa4bcdb61ea75da602279f2ee9996028e79e27384107cc089107db6a" exitCode=0
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.839647 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"a327a7a0fa4bcdb61ea75da602279f2ee9996028e79e27384107cc089107db6a"}
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.839853 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.839895 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:00 crc kubenswrapper[5116]: E1212 16:19:00.840569 5116 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.840694 5116 status_manager.go:895] "Failed to get status for pod" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" pod="openshift-authentication/oauth-openshift-66458b6674-lw784" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lw784\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.841266 5116 status_manager.go:895] "Failed to get status for pod" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:19:00 crc kubenswrapper[5116]: I1212 16:19:00.841804 5116 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.248:6443: connect: connection refused"
Dec 12 16:19:01 crc kubenswrapper[5116]: I1212 16:19:01.860956 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c599c5f52365133918f909042175b5473d940423d065de80463ec4419fead193"}
Dec 12 16:19:02 crc kubenswrapper[5116]: I1212 16:19:02.874005 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"44d3cb538dce1bb05dbb79fe083ce30b25245c0d71de7575d554c49f82599268"}
Dec 12 16:19:03 crc kubenswrapper[5116]: I1212 16:19:03.887169 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f90cd07fbab663a0cc23de174e7ab3a635b6500577966b81255dec7e63f8876d"}
Dec 12 16:19:03 crc kubenswrapper[5116]: I1212 16:19:03.887561 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7db8b468e67d108c1a179b00a447dc88c1860ac055fd50ee355710f373bcecf5"}
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.898810 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6937cf9ef5b550b6488e259c9305746d679c458fc1edc00ed58f099f363568df"}
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.898952 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.899130 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.899223 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.908474 5116 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.908504 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.969869 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:19:04 crc kubenswrapper[5116]: I1212 16:19:04.974406 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:19:05 crc kubenswrapper[5116]: I1212 16:19:05.907073 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:19:05 crc kubenswrapper[5116]: I1212 16:19:05.907186 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:05 crc kubenswrapper[5116]: I1212 16:19:05.907210 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:06 crc kubenswrapper[5116]: I1212 16:19:06.063617 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="e58b8edb-6f21-4de6-a54f-2652ed182016"
Dec 12 16:19:16 crc kubenswrapper[5116]: I1212 16:19:16.917244 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:19:17 crc kubenswrapper[5116]: I1212 16:19:17.645626 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:19:17 crc kubenswrapper[5116]: I1212 16:19:17.965830 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 12 16:19:18 crc kubenswrapper[5116]: I1212 16:19:18.289192 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 12 16:19:18 crc kubenswrapper[5116]: I1212 16:19:18.518196 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 12 16:19:18 crc kubenswrapper[5116]: I1212 16:19:18.572023 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Dec 12 16:19:18 crc kubenswrapper[5116]: I1212 16:19:18.784880 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Dec 12 16:19:18 crc kubenswrapper[5116]: I1212 16:19:18.893390 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 12 16:19:18 crc kubenswrapper[5116]: I1212 16:19:18.905820 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 12 16:19:19 crc kubenswrapper[5116]: I1212 16:19:19.334688 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 12 16:19:19 crc kubenswrapper[5116]: I1212 16:19:19.416224 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:19:19 crc kubenswrapper[5116]: I1212 16:19:19.416383 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:19:20 crc kubenswrapper[5116]: I1212 16:19:20.040870 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 16:19:20 crc kubenswrapper[5116]: I1212 16:19:20.045815 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 12 16:19:20 crc kubenswrapper[5116]: I1212 16:19:20.126707 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 12 16:19:20 crc kubenswrapper[5116]: I1212 16:19:20.629626 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 12 16:19:20 crc kubenswrapper[5116]: I1212 16:19:20.924270 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 12 16:19:20 crc kubenswrapper[5116]: I1212 16:19:20.980511 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.152100 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.164944 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.199223 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.250567 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.403564 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.408358 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.519197 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.537075 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.628790 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.651643 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.726466 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.739286 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.789740 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.797045 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.886038 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.922211 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 12 16:19:21 crc kubenswrapper[5116]: I1212 16:19:21.958526 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.229267 5116 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234031 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-lw784"]
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234098 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-8d6d7544-9gbxz"]
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234798 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234859 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="053e68c6-626a-4d3a-9f34-a55711644dd4"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234914 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" containerName="installer"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234930 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" containerName="installer"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234943 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" containerName="oauth-openshift"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.234951 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" containerName="oauth-openshift"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.235115 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1d3c81b-ff79-401d-a4b6-8098265e5534" containerName="installer"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.235137 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" containerName="oauth-openshift"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.250086 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.250573 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.316384 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.322690 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.323052 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.325561 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.327426 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.327580 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.327712 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.327864 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.328129 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.329056 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.329826 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.330286 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.330452 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.330830 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.337277 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.337672 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.356806 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.357733 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.384319 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.384295136 podStartE2EDuration="18.384295136s" podCreationTimestamp="2025-12-12 16:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:22.36884025 +0000 UTC m=+256.833052016" watchObservedRunningTime="2025-12-12 16:19:22.384295136 +0000 UTC m=+256.848506892"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.386817 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8d6d7544-9gbxz"]
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.431467 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.519862 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz"
Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.519922 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz"
Dec 12 16:19:22 crc 
kubenswrapper[5116]: I1212 16:19:22.519955 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-audit-policies\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520010 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520100 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-error\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520288 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-session\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520332 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520374 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520413 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520613 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520687 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28f074df-1514-4cf8-8765-5e3523342f2e-audit-dir\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: 
\"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520757 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-login\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520813 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jvt\" (UniqueName: \"kubernetes.io/projected/28f074df-1514-4cf8-8765-5e3523342f2e-kube-api-access-d2jvt\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.520920 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.527101 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.553511 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.585440 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622460 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622532 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28f074df-1514-4cf8-8765-5e3523342f2e-audit-dir\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622561 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-login\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622595 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jvt\" (UniqueName: \"kubernetes.io/projected/28f074df-1514-4cf8-8765-5e3523342f2e-kube-api-access-d2jvt\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622633 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622663 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28f074df-1514-4cf8-8765-5e3523342f2e-audit-dir\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622669 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622744 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622789 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-audit-policies\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622821 
5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622848 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-error\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622886 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-session\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622914 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622959 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.622988 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.623563 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.625488 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-audit-policies\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.625864 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: 
I1212 16:19:22.626537 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.631024 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-login\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.631348 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.631665 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.634491 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.636725 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-system-session\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.636766 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-error\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.637079 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.638539 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28f074df-1514-4cf8-8765-5e3523342f2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.645355 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d2jvt\" (UniqueName: \"kubernetes.io/projected/28f074df-1514-4cf8-8765-5e3523342f2e-kube-api-access-d2jvt\") pod \"oauth-openshift-8d6d7544-9gbxz\" (UID: \"28f074df-1514-4cf8-8765-5e3523342f2e\") " pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.647144 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.661500 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.790209 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.792240 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.795047 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.880587 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 16:19:22 crc kubenswrapper[5116]: I1212 16:19:22.901381 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8d6d7544-9gbxz"] Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.024504 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" 
event={"ID":"28f074df-1514-4cf8-8765-5e3523342f2e","Type":"ContainerStarted","Data":"135184d7a2d64ea43da12f6067ca09e932568ff5e80786838821ff1b23f8a678"} Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.080053 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.085963 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.086044 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.097074 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.153463 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.177779 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.191892 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.254981 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.320554 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.475962 5116 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.483816 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.518783 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.550154 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.599799 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.621888 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.716607 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.737237 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.812474 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.883218 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 
16:19:23.915411 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.947699 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.967256 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 16:19:23 crc kubenswrapper[5116]: I1212 16:19:23.972496 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.032793 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" event={"ID":"28f074df-1514-4cf8-8765-5e3523342f2e","Type":"ContainerStarted","Data":"a49eb71178f5d607803f6805217a38d69bbfc7338b2cae06cf2bb42c9de0e4ee"} Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.033016 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.040480 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.053488 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41bfba7f-9125-4770-99ea-3b72ddc0173b" path="/var/lib/kubelet/pods/41bfba7f-9125-4770-99ea-3b72ddc0173b/volumes" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.061630 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" podStartSLOduration=54.061602666 
podStartE2EDuration="54.061602666s" podCreationTimestamp="2025-12-12 16:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:24.056725402 +0000 UTC m=+258.520937158" watchObservedRunningTime="2025-12-12 16:19:24.061602666 +0000 UTC m=+258.525814432" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.139583 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.160989 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.207040 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.245419 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.249408 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.278317 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.278358 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.425549 5116 patch_prober.go:28] interesting pod/oauth-openshift-8d6d7544-9gbxz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 
10.217.0.2:47656->10.217.0.56:6443: read: connection reset by peer" start-of-body=
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.426043 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" podUID="28f074df-1514-4cf8-8765-5e3523342f2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:47656->10.217.0.56:6443: read: connection reset by peer"
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.444890 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.464240 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.539524 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.604447 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.656281 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.700655 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.711593 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.764644 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.791944 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.916235 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.931605 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 12 16:19:24 crc kubenswrapper[5116]: I1212 16:19:24.977774 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.028251 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.043415 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.044514 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.044909 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log"
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.044956 5116 generic.go:358] "Generic (PLEG): container finished" podID="28f074df-1514-4cf8-8765-5e3523342f2e" containerID="a49eb71178f5d607803f6805217a38d69bbfc7338b2cae06cf2bb42c9de0e4ee" exitCode=255
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.045294 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" event={"ID":"28f074df-1514-4cf8-8765-5e3523342f2e","Type":"ContainerDied","Data":"a49eb71178f5d607803f6805217a38d69bbfc7338b2cae06cf2bb42c9de0e4ee"}
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.046010 5116 scope.go:117] "RemoveContainer" containerID="a49eb71178f5d607803f6805217a38d69bbfc7338b2cae06cf2bb42c9de0e4ee"
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.050505 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.103305 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.150594 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.176148 5116 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.260062 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.438000 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.467336 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.552365 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.640604 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.801589 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.806185 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.906133 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.906399 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.935518 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 12 16:19:25 crc kubenswrapper[5116]: I1212 16:19:25.952276 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.016646 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.058424 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log"
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.058658 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz" event={"ID":"28f074df-1514-4cf8-8765-5e3523342f2e","Type":"ContainerStarted","Data":"c4100c898c07c15f554e13d5432096c2d0b270347cc33fa42ba82078a7f20003"}
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.058823 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz"
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.064728 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8d6d7544-9gbxz"
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.068720 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.077400 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.089390 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.221722 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.348886 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.378694 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.403663 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.419904 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.460905 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.513777 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.679237 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.733720 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.919597 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.987968 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 12 16:19:26 crc kubenswrapper[5116]: I1212 16:19:26.992851 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.003572 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.010453 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.011126 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.058073 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.059224 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.068371 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.109733 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.116272 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.226593 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.374568 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.425454 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.431434 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.458174 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.522197 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.538651 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.561823 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.597377 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.643557 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.683658 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.706572 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.745421 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.779481 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:27 crc kubenswrapper[5116]: I1212 16:19:27.902076 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.143535 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.161336 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.215481 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.273216 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.330721 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44606: no serving certificate available for the kubelet"
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.372557 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.456408 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.499937 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.742953 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.743504 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.757882 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.785501 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.822975 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.839401 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.882002 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.906953 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.979464 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.983783 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Dec 12 16:19:28 crc kubenswrapper[5116]: I1212 16:19:28.993797 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.055503 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.080554 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.109712 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.168065 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.204282 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.230187 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.272854 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.493642 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.525773 5116 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.526194 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b" gracePeriod=5
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.632770 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.636657 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.731887 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.810401 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.829088 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.859331 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.864014 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.891359 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.962000 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 12 16:19:29 crc kubenswrapper[5116]: I1212 16:19:29.963921 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.010135 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.166043 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.169550 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.212975 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.229382 5116 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.286215 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.373393 5116 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.482373 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.511170 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.558275 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.663661 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.750013 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.813624 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.862449 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.941809 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 16:19:30 crc kubenswrapper[5116]: I1212 16:19:30.975022 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.067994 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.129626 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.178547 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.194725 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.266212 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.293768 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.314582 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.334598 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.348497 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.423359 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.431249 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.502274 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.572999 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.583650 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.597620 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.624522 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.711932 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.728376 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.728867 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.779737 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.794317 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 12 16:19:31 crc kubenswrapper[5116]: I1212 16:19:31.934880 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.062480 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.123651 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.375587 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.581934 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.776617 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.784499 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.798468 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Dec 12 16:19:32 crc kubenswrapper[5116]: I1212 16:19:32.860711 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 12 16:19:33 crc kubenswrapper[5116]: I1212 16:19:33.098201 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 12 16:19:33 crc kubenswrapper[5116]: I1212 16:19:33.248025 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Dec 12 16:19:33 crc kubenswrapper[5116]: I1212 16:19:33.493254 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Dec 12 16:19:33 crc kubenswrapper[5116]: I1212 16:19:33.501532 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Dec 12 16:19:33 crc kubenswrapper[5116]: I1212 16:19:33.627175 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.012791 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.046313 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.065339 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.239935 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.387011 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.456707 5116 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.691663 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.691803 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.694209 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709202 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709336 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709401 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709494 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709568 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709638 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709644 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709754 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.709852 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.710384 5116 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.710405 5116 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.710415 5116 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.710424 5116 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.719584 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.752479 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 16:19:34 crc kubenswrapper[5116]: I1212 16:19:34.812011 5116 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:35 crc kubenswrapper[5116]: I1212 16:19:35.117760 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 12 16:19:35 crc kubenswrapper[5116]: I1212 16:19:35.117818 5116 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b" exitCode=137 Dec 12 16:19:35 crc kubenswrapper[5116]: I1212 16:19:35.117941 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 16:19:35 crc kubenswrapper[5116]: I1212 16:19:35.117961 5116 scope.go:117] "RemoveContainer" containerID="eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b" Dec 12 16:19:35 crc kubenswrapper[5116]: I1212 16:19:35.148446 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 12 16:19:35 crc kubenswrapper[5116]: I1212 16:19:35.149825 5116 scope.go:117] "RemoveContainer" containerID="eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b" Dec 12 16:19:35 crc kubenswrapper[5116]: E1212 16:19:35.150376 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b\": container with ID starting with eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b not found: ID does not exist" containerID="eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b" Dec 12 16:19:35 crc kubenswrapper[5116]: I1212 16:19:35.150427 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b"} err="failed to get container status \"eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b\": rpc error: code = NotFound desc = could not find container \"eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b\": container with ID starting with eb4bccb7b9f3c16462c3caa626d0eb6dbe396ea5577837e3aadeff9557cd0a7b not found: ID does not exist" Dec 12 16:19:36 crc 
kubenswrapper[5116]: I1212 16:19:36.052338 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 12 16:19:36 crc kubenswrapper[5116]: I1212 16:19:36.054324 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 12 16:19:49 crc kubenswrapper[5116]: I1212 16:19:49.106705 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:19:49 crc kubenswrapper[5116]: I1212 16:19:49.416847 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:19:49 crc kubenswrapper[5116]: I1212 16:19:49.417342 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:19:49 crc kubenswrapper[5116]: I1212 16:19:49.417419 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:19:49 crc kubenswrapper[5116]: I1212 16:19:49.418284 5116 kuberuntime_manager.go:1107] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02"} pod="openshift-machine-config-operator/machine-config-daemon-bb58t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:19:49 crc kubenswrapper[5116]: I1212 16:19:49.418452 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" containerID="cri-o://34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02" gracePeriod=600 Dec 12 16:19:50 crc kubenswrapper[5116]: I1212 16:19:50.223533 5116 generic.go:358] "Generic (PLEG): container finished" podID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerID="34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02" exitCode=0 Dec 12 16:19:50 crc kubenswrapper[5116]: I1212 16:19:50.223636 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerDied","Data":"34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02"} Dec 12 16:19:50 crc kubenswrapper[5116]: I1212 16:19:50.224175 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"2367f59b8be684e352302e40e8e4ed942c4d59416c1f661e7b3cdedee78bc7ed"} Dec 12 16:19:54 crc kubenswrapper[5116]: I1212 16:19:54.354363 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.521598 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"] Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.522238 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" podUID="01eff4cc-010a-4ba2-87a4-2dd5850dab4b" containerName="controller-manager" containerID="cri-o://dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66" gracePeriod=30 Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.542201 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"] Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.542532 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" podUID="e21b028a-7b09-4b86-9712-63820ff56d55" containerName="route-controller-manager" containerID="cri-o://8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6" gracePeriod=30 Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.939356 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.942965 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.995316 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"] Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.995998 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e21b028a-7b09-4b86-9712-63820ff56d55" containerName="route-controller-manager" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.996021 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21b028a-7b09-4b86-9712-63820ff56d55" containerName="route-controller-manager" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.996033 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01eff4cc-010a-4ba2-87a4-2dd5850dab4b" containerName="controller-manager" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.996043 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="01eff4cc-010a-4ba2-87a4-2dd5850dab4b" containerName="controller-manager" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.996064 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.996070 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.996197 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="01eff4cc-010a-4ba2-87a4-2dd5850dab4b" containerName="controller-manager" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 16:19:55.996214 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 16:19:55 crc kubenswrapper[5116]: I1212 
16:19:55.996226 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="e21b028a-7b09-4b86-9712-63820ff56d55" containerName="route-controller-manager" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.001712 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.008384 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"] Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.037173 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"] Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.040789 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.060124 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-client-ca\") pod \"e21b028a-7b09-4b86-9712-63820ff56d55\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.060176 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj7b8\" (UniqueName: \"kubernetes.io/projected/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-kube-api-access-xj7b8\") pod \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.060248 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-serving-cert\") pod \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\" 
(UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.060293 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-config\") pod \"e21b028a-7b09-4b86-9712-63820ff56d55\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.060325 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e21b028a-7b09-4b86-9712-63820ff56d55-tmp\") pod \"e21b028a-7b09-4b86-9712-63820ff56d55\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.060816 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e21b028a-7b09-4b86-9712-63820ff56d55-tmp" (OuterVolumeSpecName: "tmp") pod "e21b028a-7b09-4b86-9712-63820ff56d55" (UID: "e21b028a-7b09-4b86-9712-63820ff56d55"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061420 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-proxy-ca-bundles\") pod \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061472 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr65s\" (UniqueName: \"kubernetes.io/projected/e21b028a-7b09-4b86-9712-63820ff56d55-kube-api-access-tr65s\") pod \"e21b028a-7b09-4b86-9712-63820ff56d55\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061511 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21b028a-7b09-4b86-9712-63820ff56d55-serving-cert\") pod \"e21b028a-7b09-4b86-9712-63820ff56d55\" (UID: \"e21b028a-7b09-4b86-9712-63820ff56d55\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061522 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-config" (OuterVolumeSpecName: "config") pod "e21b028a-7b09-4b86-9712-63820ff56d55" (UID: "e21b028a-7b09-4b86-9712-63820ff56d55"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061540 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-tmp\") pod \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061628 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-client-ca\") pod \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061659 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-config\") pod \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\" (UID: \"01eff4cc-010a-4ba2-87a4-2dd5850dab4b\") " Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061783 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-proxy-ca-bundles\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061825 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-client-ca\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 
16:19:56.061933 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-config\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061970 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838df8f3-844e-4749-b556-1fa730063051-serving-cert\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.061997 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tqc\" (UniqueName: \"kubernetes.io/projected/838df8f3-844e-4749-b556-1fa730063051-kube-api-access-26tqc\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.062036 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/838df8f3-844e-4749-b556-1fa730063051-tmp\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.062091 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 
16:19:56.062123 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e21b028a-7b09-4b86-9712-63820ff56d55-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.062298 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-client-ca" (OuterVolumeSpecName: "client-ca") pod "e21b028a-7b09-4b86-9712-63820ff56d55" (UID: "e21b028a-7b09-4b86-9712-63820ff56d55"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.062505 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-tmp" (OuterVolumeSpecName: "tmp") pod "01eff4cc-010a-4ba2-87a4-2dd5850dab4b" (UID: "01eff4cc-010a-4ba2-87a4-2dd5850dab4b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.063274 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01eff4cc-010a-4ba2-87a4-2dd5850dab4b" (UID: "01eff4cc-010a-4ba2-87a4-2dd5850dab4b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.064402 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-client-ca" (OuterVolumeSpecName: "client-ca") pod "01eff4cc-010a-4ba2-87a4-2dd5850dab4b" (UID: "01eff4cc-010a-4ba2-87a4-2dd5850dab4b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.066006 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-config" (OuterVolumeSpecName: "config") pod "01eff4cc-010a-4ba2-87a4-2dd5850dab4b" (UID: "01eff4cc-010a-4ba2-87a4-2dd5850dab4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.067015 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"] Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.069194 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21b028a-7b09-4b86-9712-63820ff56d55-kube-api-access-tr65s" (OuterVolumeSpecName: "kube-api-access-tr65s") pod "e21b028a-7b09-4b86-9712-63820ff56d55" (UID: "e21b028a-7b09-4b86-9712-63820ff56d55"). InnerVolumeSpecName "kube-api-access-tr65s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.069618 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01eff4cc-010a-4ba2-87a4-2dd5850dab4b" (UID: "01eff4cc-010a-4ba2-87a4-2dd5850dab4b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.069870 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-kube-api-access-xj7b8" (OuterVolumeSpecName: "kube-api-access-xj7b8") pod "01eff4cc-010a-4ba2-87a4-2dd5850dab4b" (UID: "01eff4cc-010a-4ba2-87a4-2dd5850dab4b"). InnerVolumeSpecName "kube-api-access-xj7b8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.077276 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21b028a-7b09-4b86-9712-63820ff56d55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e21b028a-7b09-4b86-9712-63820ff56d55" (UID: "e21b028a-7b09-4b86-9712-63820ff56d55"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163371 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-config\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163426 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59b5z\" (UniqueName: \"kubernetes.io/projected/f7957dd6-9a07-4b90-a6d8-c6d651348abf-kube-api-access-59b5z\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163460 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7957dd6-9a07-4b90-a6d8-c6d651348abf-tmp\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz" Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163490 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/838df8f3-844e-4749-b556-1fa730063051-serving-cert\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163514 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26tqc\" (UniqueName: \"kubernetes.io/projected/838df8f3-844e-4749-b556-1fa730063051-kube-api-access-26tqc\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163551 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/838df8f3-844e-4749-b556-1fa730063051-tmp\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163742 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7957dd6-9a07-4b90-a6d8-c6d651348abf-config\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163773 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7957dd6-9a07-4b90-a6d8-c6d651348abf-serving-cert\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163795 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-proxy-ca-bundles\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163819 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-client-ca\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163837 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7957dd6-9a07-4b90-a6d8-c6d651348abf-client-ca\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163915 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163927 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e21b028a-7b09-4b86-9712-63820ff56d55-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163936 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xj7b8\" (UniqueName: \"kubernetes.io/projected/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-kube-api-access-xj7b8\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163945 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163954 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163962 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tr65s\" (UniqueName: \"kubernetes.io/projected/e21b028a-7b09-4b86-9712-63820ff56d55-kube-api-access-tr65s\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163972 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e21b028a-7b09-4b86-9712-63820ff56d55-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.163979 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.164092 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01eff4cc-010a-4ba2-87a4-2dd5850dab4b-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.164872 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-client-ca\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.165335 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-config\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.167431 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-proxy-ca-bundles\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.167563 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/838df8f3-844e-4749-b556-1fa730063051-tmp\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.171875 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838df8f3-844e-4749-b556-1fa730063051-serving-cert\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.182419 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tqc\" (UniqueName: \"kubernetes.io/projected/838df8f3-844e-4749-b556-1fa730063051-kube-api-access-26tqc\") pod \"controller-manager-7cc7c77df5-dhjmg\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") " pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.264947 5116 generic.go:358] "Generic (PLEG): container finished" podID="01eff4cc-010a-4ba2-87a4-2dd5850dab4b" containerID="dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66" exitCode=0
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.265062 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.265459 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" event={"ID":"01eff4cc-010a-4ba2-87a4-2dd5850dab4b","Type":"ContainerDied","Data":"dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66"}
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.265600 5116 scope.go:117] "RemoveContainer" containerID="dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.266039 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fsj7q" event={"ID":"01eff4cc-010a-4ba2-87a4-2dd5850dab4b","Type":"ContainerDied","Data":"2400084f7bc8cf4b764e31ef01f74d8ed6a6a8009f9c6c7effdb6192d6db6479"}
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.266679 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59b5z\" (UniqueName: \"kubernetes.io/projected/f7957dd6-9a07-4b90-a6d8-c6d651348abf-kube-api-access-59b5z\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.266750 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7957dd6-9a07-4b90-a6d8-c6d651348abf-tmp\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.267329 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7957dd6-9a07-4b90-a6d8-c6d651348abf-tmp\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.267515 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7957dd6-9a07-4b90-a6d8-c6d651348abf-config\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.267554 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7957dd6-9a07-4b90-a6d8-c6d651348abf-serving-cert\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.267641 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7957dd6-9a07-4b90-a6d8-c6d651348abf-client-ca\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.269315 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7957dd6-9a07-4b90-a6d8-c6d651348abf-client-ca\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.270044 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7957dd6-9a07-4b90-a6d8-c6d651348abf-config\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.271548 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7957dd6-9a07-4b90-a6d8-c6d651348abf-serving-cert\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.271847 5116 generic.go:358] "Generic (PLEG): container finished" podID="e21b028a-7b09-4b86-9712-63820ff56d55" containerID="8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6" exitCode=0
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.271920 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" event={"ID":"e21b028a-7b09-4b86-9712-63820ff56d55","Type":"ContainerDied","Data":"8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6"}
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.271962 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq" event={"ID":"e21b028a-7b09-4b86-9712-63820ff56d55","Type":"ContainerDied","Data":"67909c6621a4cb8ebc6f61418eeb01fb8ee82ae4e727cc13e431639d07541126"}
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.272078 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.287309 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59b5z\" (UniqueName: \"kubernetes.io/projected/f7957dd6-9a07-4b90-a6d8-c6d651348abf-kube-api-access-59b5z\") pod \"route-controller-manager-569467d9f8-87crz\" (UID: \"f7957dd6-9a07-4b90-a6d8-c6d651348abf\") " pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.289866 5116 scope.go:117] "RemoveContainer" containerID="dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66"
Dec 12 16:19:56 crc kubenswrapper[5116]: E1212 16:19:56.290456 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66\": container with ID starting with dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66 not found: ID does not exist" containerID="dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.290561 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66"} err="failed to get container status \"dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66\": rpc error: code = NotFound desc = could not find container \"dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66\": container with ID starting with dddbe1a1c8bf631552460bab149eb726d699891a9acf0a3e172834d49fcd3f66 not found: ID does not exist"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.290620 5116 scope.go:117] "RemoveContainer" containerID="8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.313645 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"]
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.316343 5116 scope.go:117] "RemoveContainer" containerID="8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6"
Dec 12 16:19:56 crc kubenswrapper[5116]: E1212 16:19:56.316786 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6\": container with ID starting with 8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6 not found: ID does not exist" containerID="8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.316840 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6"} err="failed to get container status \"8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6\": rpc error: code = NotFound desc = could not find container \"8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6\": container with ID starting with 8884e98e9e2a489a529f78725506ebb6265637f89f29c15517508fe69d6cc8b6 not found: ID does not exist"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.320890 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fsj7q"]
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.326694 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"]
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.327585 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.333059 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vq6gq"]
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.355797 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:56 crc kubenswrapper[5116]: I1212 16:19:56.585534 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"]
Dec 12 16:19:57 crc kubenswrapper[5116]: I1212 16:19:57.215392 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"]
Dec 12 16:19:57 crc kubenswrapper[5116]: W1212 16:19:57.228185 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7957dd6_9a07_4b90_a6d8_c6d651348abf.slice/crio-db21693b7e9b82ce3afe3c69dacba9158e5511b360dc61332733f5bcbed6ff75 WatchSource:0}: Error finding container db21693b7e9b82ce3afe3c69dacba9158e5511b360dc61332733f5bcbed6ff75: Status 404 returned error can't find the container with id db21693b7e9b82ce3afe3c69dacba9158e5511b360dc61332733f5bcbed6ff75
Dec 12 16:19:57 crc kubenswrapper[5116]: I1212 16:19:57.285780 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz" event={"ID":"f7957dd6-9a07-4b90-a6d8-c6d651348abf","Type":"ContainerStarted","Data":"db21693b7e9b82ce3afe3c69dacba9158e5511b360dc61332733f5bcbed6ff75"}
Dec 12 16:19:57 crc kubenswrapper[5116]: I1212 16:19:57.289579 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" event={"ID":"838df8f3-844e-4749-b556-1fa730063051","Type":"ContainerStarted","Data":"aa87aa0e3d08ff325587d990e3e1084b478be250856a2e71983af8f46e471f19"}
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.046274 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47318: no serving certificate available for the kubelet"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.053046 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01eff4cc-010a-4ba2-87a4-2dd5850dab4b" path="/var/lib/kubelet/pods/01eff4cc-010a-4ba2-87a4-2dd5850dab4b/volumes"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.054034 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21b028a-7b09-4b86-9712-63820ff56d55" path="/var/lib/kubelet/pods/e21b028a-7b09-4b86-9712-63820ff56d55/volumes"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.296615 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" event={"ID":"838df8f3-844e-4749-b556-1fa730063051","Type":"ContainerStarted","Data":"e0cd9a07d78a5cffefdd3543c40a7892d008796a6ee326546a8ebb3e6601aeb3"}
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.297426 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.299034 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz" event={"ID":"f7957dd6-9a07-4b90-a6d8-c6d651348abf","Type":"ContainerStarted","Data":"655ae0e4342a6ce26efb3f0890390f853ba740679a27f425b17561dc4850be23"}
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.299354 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.304697 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.309133 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.322151 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" podStartSLOduration=3.322126505 podStartE2EDuration="3.322126505s" podCreationTimestamp="2025-12-12 16:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:58.320010869 +0000 UTC m=+292.784222655" watchObservedRunningTime="2025-12-12 16:19:58.322126505 +0000 UTC m=+292.786338271"
Dec 12 16:19:58 crc kubenswrapper[5116]: I1212 16:19:58.344267 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-569467d9f8-87crz" podStartSLOduration=3.344247468 podStartE2EDuration="3.344247468s" podCreationTimestamp="2025-12-12 16:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:58.343619592 +0000 UTC m=+292.807831368" watchObservedRunningTime="2025-12-12 16:19:58.344247468 +0000 UTC m=+292.808459234"
Dec 12 16:20:03 crc kubenswrapper[5116]: I1212 16:20:03.545348 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 16:20:06 crc kubenswrapper[5116]: I1212 16:20:06.179459 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log"
Dec 12 16:20:06 crc kubenswrapper[5116]: I1212 16:20:06.181683 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log"
Dec 12 16:20:06 crc kubenswrapper[5116]: I1212 16:20:06.263448 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:20:06 crc kubenswrapper[5116]: I1212 16:20:06.263456 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:20:08 crc kubenswrapper[5116]: I1212 16:20:08.556475 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 12 16:20:15 crc kubenswrapper[5116]: I1212 16:20:15.508472 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"]
Dec 12 16:20:15 crc kubenswrapper[5116]: I1212 16:20:15.509797 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" podUID="838df8f3-844e-4749-b556-1fa730063051" containerName="controller-manager" containerID="cri-o://e0cd9a07d78a5cffefdd3543c40a7892d008796a6ee326546a8ebb3e6601aeb3" gracePeriod=30
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.475532 5116 generic.go:358] "Generic (PLEG): container finished" podID="838df8f3-844e-4749-b556-1fa730063051" containerID="e0cd9a07d78a5cffefdd3543c40a7892d008796a6ee326546a8ebb3e6601aeb3" exitCode=0
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.475636 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" event={"ID":"838df8f3-844e-4749-b556-1fa730063051","Type":"ContainerDied","Data":"e0cd9a07d78a5cffefdd3543c40a7892d008796a6ee326546a8ebb3e6601aeb3"}
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.799719 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.828868 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"]
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.829597 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="838df8f3-844e-4749-b556-1fa730063051" containerName="controller-manager"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.829623 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="838df8f3-844e-4749-b556-1fa730063051" containerName="controller-manager"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.829723 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="838df8f3-844e-4749-b556-1fa730063051" containerName="controller-manager"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.834007 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.845442 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"]
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.904414 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/838df8f3-844e-4749-b556-1fa730063051-tmp\") pod \"838df8f3-844e-4749-b556-1fa730063051\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") "
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.904513 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-client-ca\") pod \"838df8f3-844e-4749-b556-1fa730063051\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") "
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.904599 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-proxy-ca-bundles\") pod \"838df8f3-844e-4749-b556-1fa730063051\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") "
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.904657 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26tqc\" (UniqueName: \"kubernetes.io/projected/838df8f3-844e-4749-b556-1fa730063051-kube-api-access-26tqc\") pod \"838df8f3-844e-4749-b556-1fa730063051\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") "
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.904725 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838df8f3-844e-4749-b556-1fa730063051-serving-cert\") pod \"838df8f3-844e-4749-b556-1fa730063051\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") "
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.904840 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-config\") pod \"838df8f3-844e-4749-b556-1fa730063051\" (UID: \"838df8f3-844e-4749-b556-1fa730063051\") "
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.904989 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp9xn\" (UniqueName: \"kubernetes.io/projected/16958610-fe9a-45de-b024-526b1e2f3d5e-kube-api-access-xp9xn\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.905069 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-config\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.905126 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16958610-fe9a-45de-b024-526b1e2f3d5e-tmp\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.905154 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-client-ca\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.905175 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16958610-fe9a-45de-b024-526b1e2f3d5e-serving-cert\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906292 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "838df8f3-844e-4749-b556-1fa730063051" (UID: "838df8f3-844e-4749-b556-1fa730063051"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906143 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838df8f3-844e-4749-b556-1fa730063051-tmp" (OuterVolumeSpecName: "tmp") pod "838df8f3-844e-4749-b556-1fa730063051" (UID: "838df8f3-844e-4749-b556-1fa730063051"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906392 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-client-ca" (OuterVolumeSpecName: "client-ca") pod "838df8f3-844e-4749-b556-1fa730063051" (UID: "838df8f3-844e-4749-b556-1fa730063051"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906435 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-proxy-ca-bundles\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906439 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-config" (OuterVolumeSpecName: "config") pod "838df8f3-844e-4749-b556-1fa730063051" (UID: "838df8f3-844e-4749-b556-1fa730063051"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906550 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906568 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/838df8f3-844e-4749-b556-1fa730063051-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.906605 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.915618 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838df8f3-844e-4749-b556-1fa730063051-kube-api-access-26tqc" (OuterVolumeSpecName: "kube-api-access-26tqc") pod "838df8f3-844e-4749-b556-1fa730063051" (UID: "838df8f3-844e-4749-b556-1fa730063051"). InnerVolumeSpecName "kube-api-access-26tqc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:20:16 crc kubenswrapper[5116]: I1212 16:20:16.918286 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838df8f3-844e-4749-b556-1fa730063051-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "838df8f3-844e-4749-b556-1fa730063051" (UID: "838df8f3-844e-4749-b556-1fa730063051"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.007953 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-config\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008036 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16958610-fe9a-45de-b024-526b1e2f3d5e-tmp\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008066 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-client-ca\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008092 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16958610-fe9a-45de-b024-526b1e2f3d5e-serving-cert\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008174 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-proxy-ca-bundles\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008235 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xp9xn\" (UniqueName: \"kubernetes.io/projected/16958610-fe9a-45de-b024-526b1e2f3d5e-kube-api-access-xp9xn\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008325 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838df8f3-844e-4749-b556-1fa730063051-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008342 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26tqc\" (UniqueName: \"kubernetes.io/projected/838df8f3-844e-4749-b556-1fa730063051-kube-api-access-26tqc\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008358 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838df8f3-844e-4749-b556-1fa730063051-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.008640 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16958610-fe9a-45de-b024-526b1e2f3d5e-tmp\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.009553 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-client-ca\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.009564 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-proxy-ca-bundles\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.009654 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16958610-fe9a-45de-b024-526b1e2f3d5e-config\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.016498 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16958610-fe9a-45de-b024-526b1e2f3d5e-serving-cert\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"
Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 
16:20:17.031844 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp9xn\" (UniqueName: \"kubernetes.io/projected/16958610-fe9a-45de-b024-526b1e2f3d5e-kube-api-access-xp9xn\") pod \"controller-manager-56ffdf9c6-6wvvj\" (UID: \"16958610-fe9a-45de-b024-526b1e2f3d5e\") " pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj" Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.162976 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj" Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.378414 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj"] Dec 12 16:20:17 crc kubenswrapper[5116]: W1212 16:20:17.387034 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16958610_fe9a_45de_b024_526b1e2f3d5e.slice/crio-48b48f09019d0043037b734d3d8ceeb9049691522baa22e28a88becd7c5725fb WatchSource:0}: Error finding container 48b48f09019d0043037b734d3d8ceeb9049691522baa22e28a88becd7c5725fb: Status 404 returned error can't find the container with id 48b48f09019d0043037b734d3d8ceeb9049691522baa22e28a88becd7c5725fb Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.390214 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.483935 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" event={"ID":"838df8f3-844e-4749-b556-1fa730063051","Type":"ContainerDied","Data":"aa87aa0e3d08ff325587d990e3e1084b478be250856a2e71983af8f46e471f19"} Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.484000 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg" Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.484006 5116 scope.go:117] "RemoveContainer" containerID="e0cd9a07d78a5cffefdd3543c40a7892d008796a6ee326546a8ebb3e6601aeb3" Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.489263 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj" event={"ID":"16958610-fe9a-45de-b024-526b1e2f3d5e","Type":"ContainerStarted","Data":"48b48f09019d0043037b734d3d8ceeb9049691522baa22e28a88becd7c5725fb"} Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.515243 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"] Dec 12 16:20:17 crc kubenswrapper[5116]: I1212 16:20:17.519019 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cc7c77df5-dhjmg"] Dec 12 16:20:18 crc kubenswrapper[5116]: I1212 16:20:18.058608 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838df8f3-844e-4749-b556-1fa730063051" path="/var/lib/kubelet/pods/838df8f3-844e-4749-b556-1fa730063051/volumes" Dec 12 16:20:18 crc kubenswrapper[5116]: I1212 16:20:18.505745 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj" event={"ID":"16958610-fe9a-45de-b024-526b1e2f3d5e","Type":"ContainerStarted","Data":"e981028ee046a4eea032b96acb9276474700cbc835924140a1626f85ca5b3202"} Dec 12 16:20:18 crc kubenswrapper[5116]: I1212 16:20:18.508532 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj" Dec 12 16:20:18 crc kubenswrapper[5116]: I1212 16:20:18.517035 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj" Dec 12 16:20:18 crc kubenswrapper[5116]: I1212 16:20:18.552174 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56ffdf9c6-6wvvj" podStartSLOduration=3.552141668 podStartE2EDuration="3.552141668s" podCreationTimestamp="2025-12-12 16:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:20:18.546544208 +0000 UTC m=+313.010755984" watchObservedRunningTime="2025-12-12 16:20:18.552141668 +0000 UTC m=+313.016353464" Dec 12 16:20:30 crc kubenswrapper[5116]: I1212 16:20:30.614193 5116 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.001779 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt54g"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.005406 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zt54g" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="registry-server" containerID="cri-o://13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" gracePeriod=30 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.012249 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mksww"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.012753 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mksww" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="registry-server" containerID="cri-o://656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" gracePeriod=30 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 
16:21:17.035358 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.035871 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" podUID="b9c44a8b-640d-4806-a985-d12ada8b88dd" containerName="marketplace-operator" containerID="cri-o://c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf" gracePeriod=30 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.048322 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qt2j"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.048714 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2qt2j" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="registry-server" containerID="cri-o://ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f" gracePeriod=30 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.063794 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmzmp"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.064283 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zmzmp" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="registry-server" containerID="cri-o://9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b" gracePeriod=30 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.071902 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-sf4r2"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.104827 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.128058 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-sf4r2"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.194072 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7xd\" (UniqueName: \"kubernetes.io/projected/8a5bb5d8-160c-4e91-8408-8898560f7a5c-kube-api-access-7p7xd\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.194140 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8a5bb5d8-160c-4e91-8408-8898560f7a5c-tmp\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.194161 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a5bb5d8-160c-4e91-8408-8898560f7a5c-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.194220 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a5bb5d8-160c-4e91-8408-8898560f7a5c-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.295414 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8a5bb5d8-160c-4e91-8408-8898560f7a5c-tmp\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.295463 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a5bb5d8-160c-4e91-8408-8898560f7a5c-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.295502 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a5bb5d8-160c-4e91-8408-8898560f7a5c-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.295555 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7p7xd\" (UniqueName: \"kubernetes.io/projected/8a5bb5d8-160c-4e91-8408-8898560f7a5c-kube-api-access-7p7xd\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.299716 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/8a5bb5d8-160c-4e91-8408-8898560f7a5c-tmp\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.300337 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a5bb5d8-160c-4e91-8408-8898560f7a5c-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.313086 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a5bb5d8-160c-4e91-8408-8898560f7a5c-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.316031 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p7xd\" (UniqueName: \"kubernetes.io/projected/8a5bb5d8-160c-4e91-8408-8898560f7a5c-kube-api-access-7p7xd\") pod \"marketplace-operator-547dbd544d-sf4r2\" (UID: \"8a5bb5d8-160c-4e91-8408-8898560f7a5c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.345514 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.382024 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e is running failed: container process not found" containerID="13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.382590 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e is running failed: container process not found" containerID="13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.382931 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e is running failed: container process not found" containerID="13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.382974 5116 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-zt54g" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="registry-server" probeResult="unknown" Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.389982 5116 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.400280 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.404770 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.404824 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-mksww" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="registry-server" probeResult="unknown" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.417160 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.522374 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.608620 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-utilities\") pod \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.608675 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-catalog-content\") pod \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.608749 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56k45\" (UniqueName: \"kubernetes.io/projected/c33c5b2d-507a-41c8-884d-e5ec63c2894c-kube-api-access-56k45\") pod \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\" (UID: \"c33c5b2d-507a-41c8-884d-e5ec63c2894c\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.610352 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-utilities" (OuterVolumeSpecName: "utilities") pod "c33c5b2d-507a-41c8-884d-e5ec63c2894c" (UID: "c33c5b2d-507a-41c8-884d-e5ec63c2894c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.611316 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.619395 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33c5b2d-507a-41c8-884d-e5ec63c2894c-kube-api-access-56k45" (OuterVolumeSpecName: "kube-api-access-56k45") pod "c33c5b2d-507a-41c8-884d-e5ec63c2894c" (UID: "c33c5b2d-507a-41c8-884d-e5ec63c2894c"). InnerVolumeSpecName "kube-api-access-56k45". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.622480 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.649523 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c33c5b2d-507a-41c8-884d-e5ec63c2894c" (UID: "c33c5b2d-507a-41c8-884d-e5ec63c2894c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.678352 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-sf4r2"] Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.709739 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9c44a8b-640d-4806-a985-d12ada8b88dd-tmp\") pod \"b9c44a8b-640d-4806-a985-d12ada8b88dd\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.709909 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-utilities\") pod \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.709981 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-operator-metrics\") pod \"b9c44a8b-640d-4806-a985-d12ada8b88dd\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710025 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/b9c44a8b-640d-4806-a985-d12ada8b88dd-kube-api-access-gx5hn\") pod \"b9c44a8b-640d-4806-a985-d12ada8b88dd\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710095 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-catalog-content\") pod \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\" (UID: 
\"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710151 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b27c6\" (UniqueName: \"kubernetes.io/projected/f85c27f2-e8ee-400f-8f2a-5e389b670e09-kube-api-access-b27c6\") pod \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\" (UID: \"f85c27f2-e8ee-400f-8f2a-5e389b670e09\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710151 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c44a8b-640d-4806-a985-d12ada8b88dd-tmp" (OuterVolumeSpecName: "tmp") pod "b9c44a8b-640d-4806-a985-d12ada8b88dd" (UID: "b9c44a8b-640d-4806-a985-d12ada8b88dd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710181 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-trusted-ca\") pod \"b9c44a8b-640d-4806-a985-d12ada8b88dd\" (UID: \"b9c44a8b-640d-4806-a985-d12ada8b88dd\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710400 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9c44a8b-640d-4806-a985-d12ada8b88dd-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710418 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.710430 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33c5b2d-507a-41c8-884d-e5ec63c2894c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc 
kubenswrapper[5116]: I1212 16:21:17.710442 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-56k45\" (UniqueName: \"kubernetes.io/projected/c33c5b2d-507a-41c8-884d-e5ec63c2894c-kube-api-access-56k45\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.711146 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b9c44a8b-640d-4806-a985-d12ada8b88dd" (UID: "b9c44a8b-640d-4806-a985-d12ada8b88dd"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.714870 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85c27f2-e8ee-400f-8f2a-5e389b670e09-kube-api-access-b27c6" (OuterVolumeSpecName: "kube-api-access-b27c6") pod "f85c27f2-e8ee-400f-8f2a-5e389b670e09" (UID: "f85c27f2-e8ee-400f-8f2a-5e389b670e09"). InnerVolumeSpecName "kube-api-access-b27c6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.719503 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b9c44a8b-640d-4806-a985-d12ada8b88dd" (UID: "b9c44a8b-640d-4806-a985-d12ada8b88dd"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.719833 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-utilities" (OuterVolumeSpecName: "utilities") pod "f85c27f2-e8ee-400f-8f2a-5e389b670e09" (UID: "f85c27f2-e8ee-400f-8f2a-5e389b670e09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.720354 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c44a8b-640d-4806-a985-d12ada8b88dd-kube-api-access-gx5hn" (OuterVolumeSpecName: "kube-api-access-gx5hn") pod "b9c44a8b-640d-4806-a985-d12ada8b88dd" (UID: "b9c44a8b-640d-4806-a985-d12ada8b88dd"). InnerVolumeSpecName "kube-api-access-gx5hn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.736086 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f85c27f2-e8ee-400f-8f2a-5e389b670e09" (UID: "f85c27f2-e8ee-400f-8f2a-5e389b670e09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.811200 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-utilities\") pod \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.811825 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-catalog-content\") pod \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.811885 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z884\" (UniqueName: \"kubernetes.io/projected/01d69feb-2b7f-4fa0-9d55-d8d13736324d-kube-api-access-8z884\") pod \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\" (UID: \"01d69feb-2b7f-4fa0-9d55-d8d13736324d\") " Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.812321 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.812347 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/b9c44a8b-640d-4806-a985-d12ada8b88dd-kube-api-access-gx5hn\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.812359 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc 
kubenswrapper[5116]: I1212 16:21:17.812374 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b27c6\" (UniqueName: \"kubernetes.io/projected/f85c27f2-e8ee-400f-8f2a-5e389b670e09-kube-api-access-b27c6\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.812388 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9c44a8b-640d-4806-a985-d12ada8b88dd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.812400 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f85c27f2-e8ee-400f-8f2a-5e389b670e09-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.812676 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-utilities" (OuterVolumeSpecName: "utilities") pod "01d69feb-2b7f-4fa0-9d55-d8d13736324d" (UID: "01d69feb-2b7f-4fa0-9d55-d8d13736324d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.818641 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mksww" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.820297 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d69feb-2b7f-4fa0-9d55-d8d13736324d-kube-api-access-8z884" (OuterVolumeSpecName: "kube-api-access-8z884") pod "01d69feb-2b7f-4fa0-9d55-d8d13736324d" (UID: "01d69feb-2b7f-4fa0-9d55-d8d13736324d"). InnerVolumeSpecName "kube-api-access-8z884". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.913704 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.913737 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8z884\" (UniqueName: \"kubernetes.io/projected/01d69feb-2b7f-4fa0-9d55-d8d13736324d-kube-api-access-8z884\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.919636 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01d69feb-2b7f-4fa0-9d55-d8d13736324d" (UID: "01d69feb-2b7f-4fa0-9d55-d8d13736324d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.940370 5116 generic.go:358] "Generic (PLEG): container finished" podID="b9c44a8b-640d-4806-a985-d12ada8b88dd" containerID="c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf" exitCode=0 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.940469 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.940495 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" event={"ID":"b9c44a8b-640d-4806-a985-d12ada8b88dd","Type":"ContainerDied","Data":"c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.940530 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lkvbc" event={"ID":"b9c44a8b-640d-4806-a985-d12ada8b88dd","Type":"ContainerDied","Data":"5d138f01bf35f517d59623aebb7a7bba19ca860bc792cff6aaf09d603ce763d7"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.940552 5116 scope.go:117] "RemoveContainer" containerID="c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.945324 5116 generic.go:358] "Generic (PLEG): container finished" podID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerID="ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f" exitCode=0 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.945399 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qt2j" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.945410 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qt2j" event={"ID":"f85c27f2-e8ee-400f-8f2a-5e389b670e09","Type":"ContainerDied","Data":"ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.945459 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qt2j" event={"ID":"f85c27f2-e8ee-400f-8f2a-5e389b670e09","Type":"ContainerDied","Data":"91f2dfdc602b4ad3fcaa2f2b197725a2e9881e76fd788d76545d20a209581195"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.948302 5116 generic.go:358] "Generic (PLEG): container finished" podID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerID="13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" exitCode=0 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.948339 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt54g" event={"ID":"c33c5b2d-507a-41c8-884d-e5ec63c2894c","Type":"ContainerDied","Data":"13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.948423 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt54g" event={"ID":"c33c5b2d-507a-41c8-884d-e5ec63c2894c","Type":"ContainerDied","Data":"3c124c5de9b2a466f8f72cea880f2205e141255528fbfd3c1b2722b0c844d209"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.948633 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt54g" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.960534 5116 generic.go:358] "Generic (PLEG): container finished" podID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerID="656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" exitCode=0 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.960650 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mksww" event={"ID":"8d9629b0-298f-4c07-a908-e83a59c4c402","Type":"ContainerDied","Data":"656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.960682 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mksww" event={"ID":"8d9629b0-298f-4c07-a908-e83a59c4c402","Type":"ContainerDied","Data":"b697ff7843247cef5dcf863efe90098ea1be941bad228a8f89c4cb0e92c0364c"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.960783 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mksww" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.964849 5116 generic.go:358] "Generic (PLEG): container finished" podID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerID="9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b" exitCode=0 Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.964897 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmzmp" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.964932 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmzmp" event={"ID":"01d69feb-2b7f-4fa0-9d55-d8d13736324d","Type":"ContainerDied","Data":"9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.964988 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmzmp" event={"ID":"01d69feb-2b7f-4fa0-9d55-d8d13736324d","Type":"ContainerDied","Data":"1cfa1da787c6fc2ad08c84461b4ab4c34831254f8dcfbf9ca60ece99fb4cf1d5"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.969144 5116 scope.go:117] "RemoveContainer" containerID="c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf" Dec 12 16:21:17 crc kubenswrapper[5116]: E1212 16:21:17.969398 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf\": container with ID starting with c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf not found: ID does not exist" containerID="c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.969433 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf"} err="failed to get container status \"c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf\": rpc error: code = NotFound desc = could not find container \"c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf\": container with ID starting with c1a4b04f061c4b5d61bfda985627df5e5311b441dfb5119eb8de4aed37a95ebf not found: ID does not exist" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 
16:21:17.969452 5116 scope.go:117] "RemoveContainer" containerID="ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.971487 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" event={"ID":"8a5bb5d8-160c-4e91-8408-8898560f7a5c","Type":"ContainerStarted","Data":"f533da498b43bd150ce1db68174018b7beea39f7f82c6c8e906bc337406348eb"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.971525 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" event={"ID":"8a5bb5d8-160c-4e91-8408-8898560f7a5c","Type":"ContainerStarted","Data":"d3582f3e0e4355da94fb60047b16dfb8bd3dc3347ff1a164c2a41090a30c0fad"} Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.972247 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.973843 5116 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-sf4r2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.973885 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" podUID="8a5bb5d8-160c-4e91-8408-8898560f7a5c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 16:21:17.991486 5116 scope.go:117] "RemoveContainer" containerID="9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c" Dec 12 16:21:17 crc kubenswrapper[5116]: I1212 
16:21:17.994430 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qt2j"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.005432 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qt2j"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.015651 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-utilities\") pod \"8d9629b0-298f-4c07-a908-e83a59c4c402\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.015724 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxzkd\" (UniqueName: \"kubernetes.io/projected/8d9629b0-298f-4c07-a908-e83a59c4c402-kube-api-access-mxzkd\") pod \"8d9629b0-298f-4c07-a908-e83a59c4c402\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.015837 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-catalog-content\") pod \"8d9629b0-298f-4c07-a908-e83a59c4c402\" (UID: \"8d9629b0-298f-4c07-a908-e83a59c4c402\") " Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.016155 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d69feb-2b7f-4fa0-9d55-d8d13736324d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.016776 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-utilities" (OuterVolumeSpecName: "utilities") pod "8d9629b0-298f-4c07-a908-e83a59c4c402" (UID: "8d9629b0-298f-4c07-a908-e83a59c4c402"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.024312 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9629b0-298f-4c07-a908-e83a59c4c402-kube-api-access-mxzkd" (OuterVolumeSpecName: "kube-api-access-mxzkd") pod "8d9629b0-298f-4c07-a908-e83a59c4c402" (UID: "8d9629b0-298f-4c07-a908-e83a59c4c402"). InnerVolumeSpecName "kube-api-access-mxzkd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.038231 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.038552 5116 scope.go:117] "RemoveContainer" containerID="3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.059509 5116 scope.go:117] "RemoveContainer" containerID="ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.059886 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f\": container with ID starting with ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f not found: ID does not exist" containerID="ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.059920 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f"} err="failed to get container status \"ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f\": rpc error: code = NotFound desc = could not find container 
\"ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f\": container with ID starting with ab80874ee476b6818b6ee0d13d567023a3a5874823c9354fd9e505abcb756e1f not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.059946 5116 scope.go:117] "RemoveContainer" containerID="9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.060347 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c\": container with ID starting with 9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c not found: ID does not exist" containerID="9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.060491 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c"} err="failed to get container status \"9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c\": rpc error: code = NotFound desc = could not find container \"9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c\": container with ID starting with 9e65467263c33bacd38af1310339ff1359485b17ed580c49414ca85df3035e5c not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.060631 5116 scope.go:117] "RemoveContainer" containerID="3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.060995 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592\": container with ID starting with 3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592 not found: ID does not exist" 
containerID="3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.061031 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592"} err="failed to get container status \"3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592\": rpc error: code = NotFound desc = could not find container \"3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592\": container with ID starting with 3633264c3f06cff54ba7e6419fe2530cf8387bba7f58bfac61992c529950b592 not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.061048 5116 scope.go:117] "RemoveContainer" containerID="13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.061497 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" path="/var/lib/kubelet/pods/f85c27f2-e8ee-400f-8f2a-5e389b670e09/volumes" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.064886 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lkvbc"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.065222 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt54g"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.073028 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zt54g"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.079927 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmzmp"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.080962 5116 scope.go:117] "RemoveContainer" containerID="5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e" Dec 12 16:21:18 
crc kubenswrapper[5116]: I1212 16:21:18.084372 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zmzmp"] Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.087238 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2" podStartSLOduration=1.087221663 podStartE2EDuration="1.087221663s" podCreationTimestamp="2025-12-12 16:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:21:18.045629353 +0000 UTC m=+372.509841109" watchObservedRunningTime="2025-12-12 16:21:18.087221663 +0000 UTC m=+372.551433419" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.100671 5116 scope.go:117] "RemoveContainer" containerID="67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.104651 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d9629b0-298f-4c07-a908-e83a59c4c402" (UID: "8d9629b0-298f-4c07-a908-e83a59c4c402"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.117129 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxzkd\" (UniqueName: \"kubernetes.io/projected/8d9629b0-298f-4c07-a908-e83a59c4c402-kube-api-access-mxzkd\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.117176 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.117189 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9629b0-298f-4c07-a908-e83a59c4c402-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.125694 5116 scope.go:117] "RemoveContainer" containerID="13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.126311 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e\": container with ID starting with 13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e not found: ID does not exist" containerID="13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.126354 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e"} err="failed to get container status \"13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e\": rpc error: code = NotFound desc = could not find container \"13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e\": container with ID 
starting with 13063796d22ca21de0ee939d045ee0cf143ce76ff8c295f1b99d67f0685bc74e not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.126387 5116 scope.go:117] "RemoveContainer" containerID="5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.126843 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e\": container with ID starting with 5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e not found: ID does not exist" containerID="5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.126886 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e"} err="failed to get container status \"5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e\": rpc error: code = NotFound desc = could not find container \"5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e\": container with ID starting with 5b71cd67300d9213ae59bb8df2365c55551440b9d2ff43d9a72d4f2da4c2a15e not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.126914 5116 scope.go:117] "RemoveContainer" containerID="67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.127285 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7\": container with ID starting with 67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7 not found: ID does not exist" containerID="67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7" Dec 12 
16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.127332 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7"} err="failed to get container status \"67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7\": rpc error: code = NotFound desc = could not find container \"67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7\": container with ID starting with 67da9588bc3c466e67c3b92192e1b19e8a0569bd4c93563d3f442fc6ece5c9d7 not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.127361 5116 scope.go:117] "RemoveContainer" containerID="656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.146473 5116 scope.go:117] "RemoveContainer" containerID="503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.169131 5116 scope.go:117] "RemoveContainer" containerID="09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.223455 5116 scope.go:117] "RemoveContainer" containerID="656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.224004 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23\": container with ID starting with 656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23 not found: ID does not exist" containerID="656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.224057 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23"} 
err="failed to get container status \"656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23\": rpc error: code = NotFound desc = could not find container \"656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23\": container with ID starting with 656f7ca1541208a7f3d0c9024c4241d1ca85f4318fb13f6a701b175a61e7ee23 not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.224095 5116 scope.go:117] "RemoveContainer" containerID="503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.224510 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6\": container with ID starting with 503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6 not found: ID does not exist" containerID="503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.224588 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6"} err="failed to get container status \"503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6\": rpc error: code = NotFound desc = could not find container \"503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6\": container with ID starting with 503e516b31d2fb5b519c7dcba66bc91fdfcac291fcf8219f92d2eb974d5df5b6 not found: ID does not exist" Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.224629 5116 scope.go:117] "RemoveContainer" containerID="09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4" Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.224955 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4\": container with ID starting with 09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4 not found: ID does not exist" containerID="09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.224995 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4"} err="failed to get container status \"09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4\": rpc error: code = NotFound desc = could not find container \"09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4\": container with ID starting with 09bf698613949270c67deb32da8ab12cfea84ee779c308a1c7b3e6a4891542f4 not found: ID does not exist"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.225019 5116 scope.go:117] "RemoveContainer" containerID="9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.246998 5116 scope.go:117] "RemoveContainer" containerID="466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.267915 5116 scope.go:117] "RemoveContainer" containerID="e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.288240 5116 scope.go:117] "RemoveContainer" containerID="9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b"
Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.288761 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b\": container with ID starting with 9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b not found: ID does not exist" containerID="9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.288811 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b"} err="failed to get container status \"9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b\": rpc error: code = NotFound desc = could not find container \"9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b\": container with ID starting with 9a169e4e3a4e4656a1b0462082e9dbf9134d7a6309368da63b0b7e785e25100b not found: ID does not exist"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.288849 5116 scope.go:117] "RemoveContainer" containerID="466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168"
Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.289271 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168\": container with ID starting with 466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168 not found: ID does not exist" containerID="466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.289307 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168"} err="failed to get container status \"466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168\": rpc error: code = NotFound desc = could not find container \"466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168\": container with ID starting with 466b8b2366e66387e7e9fd71990ced893407061b5d9f6f135b46048c463be168 not found: ID does not exist"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.289333 5116 scope.go:117] "RemoveContainer" containerID="e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e"
Dec 12 16:21:18 crc kubenswrapper[5116]: E1212 16:21:18.289657 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e\": container with ID starting with e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e not found: ID does not exist" containerID="e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.289690 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e"} err="failed to get container status \"e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e\": rpc error: code = NotFound desc = could not find container \"e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e\": container with ID starting with e799c4d6bb2a1f5eb57effa99c2f25941339d1c68e44819b52378f7979335b5e not found: ID does not exist"
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.319477 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mksww"]
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.323844 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mksww"]
Dec 12 16:21:18 crc kubenswrapper[5116]: I1212 16:21:18.984968 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-sf4r2"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217207 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjdvs"]
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217933 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217948 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217959 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217965 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217975 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217982 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217992 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.217997 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218007 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218012 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218023 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218029 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218040 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218045 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="extract-utilities"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218052 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218057 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218066 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218071 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218079 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218084 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218096 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9c44a8b-640d-4806-a985-d12ada8b88dd" containerName="marketplace-operator"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218131 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c44a8b-640d-4806-a985-d12ada8b88dd" containerName="marketplace-operator"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218144 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218150 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218160 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218167 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="extract-content"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218248 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218256 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="f85c27f2-e8ee-400f-8f2a-5e389b670e09" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218265 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9c44a8b-640d-4806-a985-d12ada8b88dd" containerName="marketplace-operator"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218275 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.218281 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" containerName="registry-server"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.228097 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.231681 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.255821 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsp6\" (UniqueName: \"kubernetes.io/projected/9d42a41e-9017-4394-84a0-12172fb8d861-kube-api-access-wmsp6\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.256078 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42a41e-9017-4394-84a0-12172fb8d861-utilities\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.256218 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42a41e-9017-4394-84a0-12172fb8d861-catalog-content\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.256383 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjdvs"]
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.357198 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42a41e-9017-4394-84a0-12172fb8d861-catalog-content\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.357288 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsp6\" (UniqueName: \"kubernetes.io/projected/9d42a41e-9017-4394-84a0-12172fb8d861-kube-api-access-wmsp6\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.357333 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42a41e-9017-4394-84a0-12172fb8d861-utilities\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.357870 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42a41e-9017-4394-84a0-12172fb8d861-utilities\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.357995 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42a41e-9017-4394-84a0-12172fb8d861-catalog-content\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.380283 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsp6\" (UniqueName: \"kubernetes.io/projected/9d42a41e-9017-4394-84a0-12172fb8d861-kube-api-access-wmsp6\") pod \"certified-operators-jjdvs\" (UID: \"9d42a41e-9017-4394-84a0-12172fb8d861\") " pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.425082 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vqhth"]
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.430991 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqhth"]
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.431170 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.434821 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.458848 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplh4\" (UniqueName: \"kubernetes.io/projected/54514c91-e486-40d1-a410-2374a646300a-kube-api-access-bplh4\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.459130 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54514c91-e486-40d1-a410-2374a646300a-utilities\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.459224 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54514c91-e486-40d1-a410-2374a646300a-catalog-content\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.553769 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjdvs"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.560366 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bplh4\" (UniqueName: \"kubernetes.io/projected/54514c91-e486-40d1-a410-2374a646300a-kube-api-access-bplh4\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.560450 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54514c91-e486-40d1-a410-2374a646300a-utilities\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.560473 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54514c91-e486-40d1-a410-2374a646300a-catalog-content\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.560876 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54514c91-e486-40d1-a410-2374a646300a-catalog-content\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.561306 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54514c91-e486-40d1-a410-2374a646300a-utilities\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.582926 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bplh4\" (UniqueName: \"kubernetes.io/projected/54514c91-e486-40d1-a410-2374a646300a-kube-api-access-bplh4\") pod \"community-operators-vqhth\" (UID: \"54514c91-e486-40d1-a410-2374a646300a\") " pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.759283 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqhth"
Dec 12 16:21:19 crc kubenswrapper[5116]: I1212 16:21:19.996770 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjdvs"]
Dec 12 16:21:20 crc kubenswrapper[5116]: I1212 16:21:20.000638 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqhth"]
Dec 12 16:21:20 crc kubenswrapper[5116]: W1212 16:21:20.012579 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54514c91_e486_40d1_a410_2374a646300a.slice/crio-3331cac8f7fa9ae3570ddf1657816fa80b976c8bb6435c3fdac7cc8be35fef71 WatchSource:0}: Error finding container 3331cac8f7fa9ae3570ddf1657816fa80b976c8bb6435c3fdac7cc8be35fef71: Status 404 returned error can't find the container with id 3331cac8f7fa9ae3570ddf1657816fa80b976c8bb6435c3fdac7cc8be35fef71
Dec 12 16:21:20 crc kubenswrapper[5116]: I1212 16:21:20.051875 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d69feb-2b7f-4fa0-9d55-d8d13736324d" path="/var/lib/kubelet/pods/01d69feb-2b7f-4fa0-9d55-d8d13736324d/volumes"
Dec 12 16:21:20 crc kubenswrapper[5116]: I1212 16:21:20.052642 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9629b0-298f-4c07-a908-e83a59c4c402" path="/var/lib/kubelet/pods/8d9629b0-298f-4c07-a908-e83a59c4c402/volumes"
Dec 12 16:21:20 crc kubenswrapper[5116]: I1212 16:21:20.053299 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c44a8b-640d-4806-a985-d12ada8b88dd" path="/var/lib/kubelet/pods/b9c44a8b-640d-4806-a985-d12ada8b88dd/volumes"
Dec 12 16:21:20 crc kubenswrapper[5116]: I1212 16:21:20.054199 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33c5b2d-507a-41c8-884d-e5ec63c2894c" path="/var/lib/kubelet/pods/c33c5b2d-507a-41c8-884d-e5ec63c2894c/volumes"
Dec 12 16:21:20 crc kubenswrapper[5116]: I1212 16:21:20.999627 5116 generic.go:358] "Generic (PLEG): container finished" podID="54514c91-e486-40d1-a410-2374a646300a" containerID="a697f0ad2a05882949f8017b6be7e1fee5c060235940f47588fbf435a61e01a8" exitCode=0
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:20.999798 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqhth" event={"ID":"54514c91-e486-40d1-a410-2374a646300a","Type":"ContainerDied","Data":"a697f0ad2a05882949f8017b6be7e1fee5c060235940f47588fbf435a61e01a8"}
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:20.999828 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqhth" event={"ID":"54514c91-e486-40d1-a410-2374a646300a","Type":"ContainerStarted","Data":"3331cac8f7fa9ae3570ddf1657816fa80b976c8bb6435c3fdac7cc8be35fef71"}
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.001660 5116 generic.go:358] "Generic (PLEG): container finished" podID="9d42a41e-9017-4394-84a0-12172fb8d861" containerID="1b75f6e9d021d44220364068c6a1f0165e55500782535dc667c7a04634a13c74" exitCode=0
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.001736 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjdvs" event={"ID":"9d42a41e-9017-4394-84a0-12172fb8d861","Type":"ContainerDied","Data":"1b75f6e9d021d44220364068c6a1f0165e55500782535dc667c7a04634a13c74"}
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.002141 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjdvs" event={"ID":"9d42a41e-9017-4394-84a0-12172fb8d861","Type":"ContainerStarted","Data":"8b049f04690b49dd5fab0c306496bc56f83767c55f85428ce92b9c9f75b4bb1d"}
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.635608 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4rfj8"]
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.643944 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rfj8"]
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.644150 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.656083 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.687283 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-utilities\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.687399 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prcw8\" (UniqueName: \"kubernetes.io/projected/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-kube-api-access-prcw8\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.687451 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-catalog-content\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.788491 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prcw8\" (UniqueName: \"kubernetes.io/projected/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-kube-api-access-prcw8\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.788797 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-catalog-content\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.788849 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-utilities\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.789343 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-utilities\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.789572 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-catalog-content\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.819225 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qwx8r"]
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.820422 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prcw8\" (UniqueName: \"kubernetes.io/projected/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-kube-api-access-prcw8\") pod \"redhat-marketplace-4rfj8\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.831303 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.834139 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.850476 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qwx8r"]
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.889671 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e067fe-7377-4e9a-a692-7d3186ee3114-catalog-content\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.889735 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9cz\" (UniqueName: \"kubernetes.io/projected/d0e067fe-7377-4e9a-a692-7d3186ee3114-kube-api-access-8h9cz\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.889768 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e067fe-7377-4e9a-a692-7d3186ee3114-utilities\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.992714 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e067fe-7377-4e9a-a692-7d3186ee3114-catalog-content\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.992957 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9cz\" (UniqueName: \"kubernetes.io/projected/d0e067fe-7377-4e9a-a692-7d3186ee3114-kube-api-access-8h9cz\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.993096 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e067fe-7377-4e9a-a692-7d3186ee3114-utilities\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.993779 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e067fe-7377-4e9a-a692-7d3186ee3114-utilities\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:21 crc kubenswrapper[5116]: I1212 16:21:21.996535 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e067fe-7377-4e9a-a692-7d3186ee3114-catalog-content\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.015126 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9cz\" (UniqueName: \"kubernetes.io/projected/d0e067fe-7377-4e9a-a692-7d3186ee3114-kube-api-access-8h9cz\") pod \"redhat-operators-qwx8r\" (UID: \"d0e067fe-7377-4e9a-a692-7d3186ee3114\") " pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.016562 5116 generic.go:358] "Generic (PLEG): container finished" podID="9d42a41e-9017-4394-84a0-12172fb8d861" containerID="3fc91342509bdf6790289534f86d0044861772d9edf97aa9ddec84f8ab202de2" exitCode=0
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.016646 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjdvs" event={"ID":"9d42a41e-9017-4394-84a0-12172fb8d861","Type":"ContainerDied","Data":"3fc91342509bdf6790289534f86d0044861772d9edf97aa9ddec84f8ab202de2"}
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.019385 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqhth" event={"ID":"54514c91-e486-40d1-a410-2374a646300a","Type":"ContainerStarted","Data":"581a1fef71fcfd210a9cb3be24f29b6a249acbad4f674e8ef9ceffa40b206cd0"}
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.025018 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rfj8"
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.167185 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qwx8r"
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.472234 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rfj8"]
Dec 12 16:21:22 crc kubenswrapper[5116]: I1212 16:21:22.601984 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qwx8r"]
Dec 12 16:21:22 crc kubenswrapper[5116]: W1212 16:21:22.610569 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e067fe_7377_4e9a_a692_7d3186ee3114.slice/crio-ca9013c672a5316c77fff5fdfca65bacaa2b14f235ace5278b7016e513a43f8b WatchSource:0}: Error finding container ca9013c672a5316c77fff5fdfca65bacaa2b14f235ace5278b7016e513a43f8b: Status 404 returned error can't find the container with id ca9013c672a5316c77fff5fdfca65bacaa2b14f235ace5278b7016e513a43f8b
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.029165 5116 generic.go:358] "Generic (PLEG): container finished" podID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerID="92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c" exitCode=0
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.029241 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rfj8" event={"ID":"9c869e6b-8812-4b02-8c2e-720bed5f6ec7","Type":"ContainerDied","Data":"92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c"}
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.029329 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rfj8" event={"ID":"9c869e6b-8812-4b02-8c2e-720bed5f6ec7","Type":"ContainerStarted","Data":"d2ac77df8df44b28270a46f6744e8fbf32b7cf07683ecffe40d92cfa900d8edb"}
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.033488 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjdvs" event={"ID":"9d42a41e-9017-4394-84a0-12172fb8d861","Type":"ContainerStarted","Data":"64fb5f8da73121f20a02ea0a4f1d11886ef5d75d58f6291b5249964b9cdc0786"}
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.040909 5116 generic.go:358] "Generic (PLEG): container finished" podID="d0e067fe-7377-4e9a-a692-7d3186ee3114" containerID="23b310432aa1cd3a9eca088848032c984d5765dcd0d14ad190c530f9f2c641db" exitCode=0
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.040999 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwx8r" event={"ID":"d0e067fe-7377-4e9a-a692-7d3186ee3114","Type":"ContainerDied","Data":"23b310432aa1cd3a9eca088848032c984d5765dcd0d14ad190c530f9f2c641db"}
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.041132 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwx8r" event={"ID":"d0e067fe-7377-4e9a-a692-7d3186ee3114","Type":"ContainerStarted","Data":"ca9013c672a5316c77fff5fdfca65bacaa2b14f235ace5278b7016e513a43f8b"}
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.048823 5116 generic.go:358] "Generic (PLEG): container finished" podID="54514c91-e486-40d1-a410-2374a646300a" containerID="581a1fef71fcfd210a9cb3be24f29b6a249acbad4f674e8ef9ceffa40b206cd0" exitCode=0
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.048916 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqhth" event={"ID":"54514c91-e486-40d1-a410-2374a646300a","Type":"ContainerDied","Data":"581a1fef71fcfd210a9cb3be24f29b6a249acbad4f674e8ef9ceffa40b206cd0"}
Dec 12 16:21:23 crc kubenswrapper[5116]: I1212 16:21:23.096301 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjdvs" podStartSLOduration=3.467827447 podStartE2EDuration="4.096277914s" podCreationTimestamp="2025-12-12 16:21:19 +0000 UTC" firstStartedPulling="2025-12-12 16:21:21.002381548 +0000 UTC m=+375.466593304" lastFinishedPulling="2025-12-12 16:21:21.630832015 +0000 UTC m=+376.095043771" observedRunningTime="2025-12-12 16:21:23.095755891 +0000 UTC m=+377.559967647" watchObservedRunningTime="2025-12-12 16:21:23.096277914 +0000 UTC m=+377.560489670"
Dec 12 16:21:24 crc kubenswrapper[5116]: I1212 16:21:24.070646 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwx8r" event={"ID":"d0e067fe-7377-4e9a-a692-7d3186ee3114","Type":"ContainerStarted","Data":"af209749d7be2c3da71241b213e956edf0ad343f7200fc89552fc928d433afb1"}
Dec 12 16:21:24 crc kubenswrapper[5116]: I1212 16:21:24.077526 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqhth" event={"ID":"54514c91-e486-40d1-a410-2374a646300a","Type":"ContainerStarted","Data":"f867c2ac0746e6c1a33a8bf3d3a76959305f662bcf5458ffc7263ac18f1700ea"}
Dec 12 16:21:25 crc kubenswrapper[5116]: I1212 16:21:25.085380 5116 generic.go:358] "Generic (PLEG): container finished" podID="d0e067fe-7377-4e9a-a692-7d3186ee3114" containerID="af209749d7be2c3da71241b213e956edf0ad343f7200fc89552fc928d433afb1" exitCode=0
Dec 12 16:21:25 crc kubenswrapper[5116]: I1212 16:21:25.085499 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwx8r" event={"ID":"d0e067fe-7377-4e9a-a692-7d3186ee3114","Type":"ContainerDied","Data":"af209749d7be2c3da71241b213e956edf0ad343f7200fc89552fc928d433afb1"}
Dec 12 16:21:25 crc kubenswrapper[5116]: I1212 16:21:25.090607 5116 generic.go:358] "Generic (PLEG): container finished" podID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerID="54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75" exitCode=0
Dec 12 16:21:25 crc kubenswrapper[5116]: I1212 16:21:25.091518 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rfj8"
event={"ID":"9c869e6b-8812-4b02-8c2e-720bed5f6ec7","Type":"ContainerDied","Data":"54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75"} Dec 12 16:21:25 crc kubenswrapper[5116]: I1212 16:21:25.108087 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vqhth" podStartSLOduration=5.537127301 podStartE2EDuration="6.108055131s" podCreationTimestamp="2025-12-12 16:21:19 +0000 UTC" firstStartedPulling="2025-12-12 16:21:21.000569098 +0000 UTC m=+375.464780864" lastFinishedPulling="2025-12-12 16:21:21.571496938 +0000 UTC m=+376.035708694" observedRunningTime="2025-12-12 16:21:24.113420166 +0000 UTC m=+378.577631952" watchObservedRunningTime="2025-12-12 16:21:25.108055131 +0000 UTC m=+379.572266887" Dec 12 16:21:26 crc kubenswrapper[5116]: I1212 16:21:26.109276 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rfj8" event={"ID":"9c869e6b-8812-4b02-8c2e-720bed5f6ec7","Type":"ContainerStarted","Data":"bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814"} Dec 12 16:21:26 crc kubenswrapper[5116]: I1212 16:21:26.112985 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwx8r" event={"ID":"d0e067fe-7377-4e9a-a692-7d3186ee3114","Type":"ContainerStarted","Data":"0cb1aed02e11ae33a0eb5f8a6a179a7a23cc5dccca4b18c21375780d8cee2d2a"} Dec 12 16:21:26 crc kubenswrapper[5116]: I1212 16:21:26.133256 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4rfj8" podStartSLOduration=3.7219261059999997 podStartE2EDuration="5.133232418s" podCreationTimestamp="2025-12-12 16:21:21 +0000 UTC" firstStartedPulling="2025-12-12 16:21:23.030763251 +0000 UTC m=+377.494975027" lastFinishedPulling="2025-12-12 16:21:24.442069593 +0000 UTC m=+378.906281339" observedRunningTime="2025-12-12 16:21:26.12957691 +0000 UTC m=+380.593788706" 
watchObservedRunningTime="2025-12-12 16:21:26.133232418 +0000 UTC m=+380.597444174" Dec 12 16:21:26 crc kubenswrapper[5116]: I1212 16:21:26.155781 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qwx8r" podStartSLOduration=4.381360048 podStartE2EDuration="5.155761045s" podCreationTimestamp="2025-12-12 16:21:21 +0000 UTC" firstStartedPulling="2025-12-12 16:21:23.042272251 +0000 UTC m=+377.506484007" lastFinishedPulling="2025-12-12 16:21:23.816673248 +0000 UTC m=+378.280885004" observedRunningTime="2025-12-12 16:21:26.151622623 +0000 UTC m=+380.615834379" watchObservedRunningTime="2025-12-12 16:21:26.155761045 +0000 UTC m=+380.619972801" Dec 12 16:21:29 crc kubenswrapper[5116]: I1212 16:21:29.555848 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjdvs" Dec 12 16:21:29 crc kubenswrapper[5116]: I1212 16:21:29.556886 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-jjdvs" Dec 12 16:21:29 crc kubenswrapper[5116]: I1212 16:21:29.604245 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjdvs" Dec 12 16:21:29 crc kubenswrapper[5116]: I1212 16:21:29.759643 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vqhth" Dec 12 16:21:29 crc kubenswrapper[5116]: I1212 16:21:29.763413 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vqhth" Dec 12 16:21:29 crc kubenswrapper[5116]: I1212 16:21:29.812716 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vqhth" Dec 12 16:21:30 crc kubenswrapper[5116]: I1212 16:21:30.179731 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-vqhth" Dec 12 16:21:30 crc kubenswrapper[5116]: I1212 16:21:30.190724 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jjdvs" Dec 12 16:21:32 crc kubenswrapper[5116]: I1212 16:21:32.025714 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4rfj8" Dec 12 16:21:32 crc kubenswrapper[5116]: I1212 16:21:32.026127 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4rfj8" Dec 12 16:21:32 crc kubenswrapper[5116]: I1212 16:21:32.072805 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4rfj8" Dec 12 16:21:32 crc kubenswrapper[5116]: I1212 16:21:32.167734 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-qwx8r" Dec 12 16:21:32 crc kubenswrapper[5116]: I1212 16:21:32.167787 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qwx8r" Dec 12 16:21:32 crc kubenswrapper[5116]: I1212 16:21:32.200542 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4rfj8" Dec 12 16:21:32 crc kubenswrapper[5116]: I1212 16:21:32.215367 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qwx8r" Dec 12 16:21:33 crc kubenswrapper[5116]: I1212 16:21:33.200941 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qwx8r" Dec 12 16:21:49 crc kubenswrapper[5116]: I1212 16:21:49.416190 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:21:49 crc kubenswrapper[5116]: I1212 16:21:49.416768 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:22:19 crc kubenswrapper[5116]: I1212 16:22:19.416781 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:22:19 crc kubenswrapper[5116]: I1212 16:22:19.418288 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.415810 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.416487 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.416566 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.417377 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2367f59b8be684e352302e40e8e4ed942c4d59416c1f661e7b3cdedee78bc7ed"} pod="openshift-machine-config-operator/machine-config-daemon-bb58t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.417447 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" containerID="cri-o://2367f59b8be684e352302e40e8e4ed942c4d59416c1f661e7b3cdedee78bc7ed" gracePeriod=600 Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.746419 5116 generic.go:358] "Generic (PLEG): container finished" podID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerID="2367f59b8be684e352302e40e8e4ed942c4d59416c1f661e7b3cdedee78bc7ed" exitCode=0 Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.746584 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerDied","Data":"2367f59b8be684e352302e40e8e4ed942c4d59416c1f661e7b3cdedee78bc7ed"} Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.746893 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" 
event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"f5b44f9ffb3248b33fd2a8f37604c18d69f889be1bf25780c931629ed9dec483"} Dec 12 16:22:49 crc kubenswrapper[5116]: I1212 16:22:49.746934 5116 scope.go:117] "RemoveContainer" containerID="34e83b3f8658f7c6542f9117e7d8c3bad4d609b1d06da1e4fc4f1c0cff203b02" Dec 12 16:24:49 crc kubenswrapper[5116]: I1212 16:24:49.417043 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:24:49 crc kubenswrapper[5116]: I1212 16:24:49.418172 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:25:06 crc kubenswrapper[5116]: I1212 16:25:06.290161 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log" Dec 12 16:25:06 crc kubenswrapper[5116]: I1212 16:25:06.293838 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log" Dec 12 16:25:06 crc kubenswrapper[5116]: I1212 16:25:06.333455 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:25:06 crc kubenswrapper[5116]: I1212 16:25:06.335634 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:25:19 crc kubenswrapper[5116]: I1212 16:25:19.416473 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:25:19 crc kubenswrapper[5116]: I1212 16:25:19.417462 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:25:25 crc kubenswrapper[5116]: I1212 16:25:25.758704 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40798: no serving certificate available for the kubelet" Dec 12 16:25:49 crc kubenswrapper[5116]: I1212 16:25:49.416474 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:25:49 crc kubenswrapper[5116]: I1212 16:25:49.417504 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:25:49 crc kubenswrapper[5116]: I1212 16:25:49.417613 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:25:49 crc kubenswrapper[5116]: I1212 16:25:49.418630 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5b44f9ffb3248b33fd2a8f37604c18d69f889be1bf25780c931629ed9dec483"} pod="openshift-machine-config-operator/machine-config-daemon-bb58t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:25:49 crc kubenswrapper[5116]: I1212 16:25:49.418734 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" containerID="cri-o://f5b44f9ffb3248b33fd2a8f37604c18d69f889be1bf25780c931629ed9dec483" gracePeriod=600 Dec 12 16:25:50 crc kubenswrapper[5116]: I1212 16:25:50.051717 5116 generic.go:358] "Generic (PLEG): container finished" podID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerID="f5b44f9ffb3248b33fd2a8f37604c18d69f889be1bf25780c931629ed9dec483" exitCode=0 Dec 12 16:25:50 crc kubenswrapper[5116]: I1212 16:25:50.056990 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerDied","Data":"f5b44f9ffb3248b33fd2a8f37604c18d69f889be1bf25780c931629ed9dec483"} Dec 12 16:25:50 crc kubenswrapper[5116]: I1212 16:25:50.057086 5116 scope.go:117] "RemoveContainer" containerID="2367f59b8be684e352302e40e8e4ed942c4d59416c1f661e7b3cdedee78bc7ed" Dec 12 16:25:50 crc kubenswrapper[5116]: I1212 16:25:50.070871 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:25:51 crc kubenswrapper[5116]: I1212 16:25:51.061597 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"85975e01cd9e5ce0c52a47772394bbc32f968256a73c3499bc14dec7e81dc5eb"} Dec 12 16:26:13 crc kubenswrapper[5116]: I1212 16:26:13.783856 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"] Dec 12 16:26:13 crc kubenswrapper[5116]: I1212 16:26:13.784951 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerName="kube-rbac-proxy" containerID="cri-o://6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3" gracePeriod=30 Dec 12 16:26:13 crc kubenswrapper[5116]: I1212 16:26:13.785151 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerName="ovnkube-cluster-manager" containerID="cri-o://5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.006244 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fg2lh"] Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.007330 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-controller" containerID="cri-o://3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.007363 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.007471 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="nbdb" containerID="cri-o://9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.007416 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="sbdb" containerID="cri-o://6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.007695 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kube-rbac-proxy-node" containerID="cri-o://c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.007633 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="northd" containerID="cri-o://84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.007562 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-acl-logging" containerID="cri-o://33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" gracePeriod=30 Dec 12 16:26:14 crc 
kubenswrapper[5116]: I1212 16:26:14.014894 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.048697 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovnkube-controller" containerID="cri-o://e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" gracePeriod=30 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.064707 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6"] Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.065649 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerName="kube-rbac-proxy" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.065689 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerName="kube-rbac-proxy" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.065711 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerName="ovnkube-cluster-manager" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.065810 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerName="ovnkube-cluster-manager" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.065941 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerName="kube-rbac-proxy" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.065993 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" 
containerName="ovnkube-cluster-manager" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.071185 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.139152 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovn-control-plane-metrics-cert\") pod \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.139246 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovnkube-config\") pod \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.140570 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3252cf25-4bc0-4262-923c-20bb5a19f1cb" (UID: "3252cf25-4bc0-4262-923c-20bb5a19f1cb"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.144181 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-env-overrides\") pod \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.144831 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3252cf25-4bc0-4262-923c-20bb5a19f1cb" (UID: "3252cf25-4bc0-4262-923c-20bb5a19f1cb"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.144907 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-str5m\" (UniqueName: \"kubernetes.io/projected/3252cf25-4bc0-4262-923c-20bb5a19f1cb-kube-api-access-str5m\") pod \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\" (UID: \"3252cf25-4bc0-4262-923c-20bb5a19f1cb\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.145096 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e68749a-c333-4d0d-8334-a5a221ffa1ab-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.145347 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e68749a-c333-4d0d-8334-a5a221ffa1ab-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: 
\"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.146458 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e68749a-c333-4d0d-8334-a5a221ffa1ab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.146666 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fptnr\" (UniqueName: \"kubernetes.io/projected/4e68749a-c333-4d0d-8334-a5a221ffa1ab-kube-api-access-fptnr\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.146804 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.146860 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.150453 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "3252cf25-4bc0-4262-923c-20bb5a19f1cb" (UID: "3252cf25-4bc0-4262-923c-20bb5a19f1cb"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.153559 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3252cf25-4bc0-4262-923c-20bb5a19f1cb-kube-api-access-str5m" (OuterVolumeSpecName: "kube-api-access-str5m") pod "3252cf25-4bc0-4262-923c-20bb5a19f1cb" (UID: "3252cf25-4bc0-4262-923c-20bb5a19f1cb"). InnerVolumeSpecName "kube-api-access-str5m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.219955 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fg2lh_789dbc62-9a37-4521-89a5-476e80e7beb6/ovn-acl-logging/0.log" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220383 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fg2lh_789dbc62-9a37-4521-89a5-476e80e7beb6/ovn-controller/0.log" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220755 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" exitCode=0 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220779 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" exitCode=0 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220786 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" exitCode=143 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220793 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" 
containerID="3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" exitCode=143 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220864 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220893 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220902 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.220912 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.222837 5116 generic.go:358] "Generic (PLEG): container finished" podID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerID="5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3" exitCode=0 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.222857 5116 generic.go:358] "Generic (PLEG): container finished" podID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" containerID="6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3" exitCode=0 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.222967 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" event={"ID":"3252cf25-4bc0-4262-923c-20bb5a19f1cb","Type":"ContainerDied","Data":"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.222985 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" event={"ID":"3252cf25-4bc0-4262-923c-20bb5a19f1cb","Type":"ContainerDied","Data":"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.222995 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" event={"ID":"3252cf25-4bc0-4262-923c-20bb5a19f1cb","Type":"ContainerDied","Data":"0a618221781dd879f5453e177b4a81c2b41d0a2aba7e6c00bf515c3c346b7df3"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.223014 5116 scope.go:117] "RemoveContainer" containerID="5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.223182 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.238819 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bphkq_0e71d710-0829-4655-b88f-9318b9776228/kube-multus/0.log" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.238887 5116 generic.go:358] "Generic (PLEG): container finished" podID="0e71d710-0829-4655-b88f-9318b9776228" containerID="9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a" exitCode=2 Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.239101 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bphkq" event={"ID":"0e71d710-0829-4655-b88f-9318b9776228","Type":"ContainerDied","Data":"9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a"} Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.241125 5116 scope.go:117] "RemoveContainer" containerID="9540b17566046fb638ab4dbd4aa4867f47411b075002579df0ed875ddde9508a" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.250952 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fptnr\" (UniqueName: \"kubernetes.io/projected/4e68749a-c333-4d0d-8334-a5a221ffa1ab-kube-api-access-fptnr\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.251071 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e68749a-c333-4d0d-8334-a5a221ffa1ab-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.251187 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e68749a-c333-4d0d-8334-a5a221ffa1ab-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.251247 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e68749a-c333-4d0d-8334-a5a221ffa1ab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.251318 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-str5m\" (UniqueName: \"kubernetes.io/projected/3252cf25-4bc0-4262-923c-20bb5a19f1cb-kube-api-access-str5m\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.251330 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3252cf25-4bc0-4262-923c-20bb5a19f1cb-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.255611 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e68749a-c333-4d0d-8334-a5a221ffa1ab-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.256535 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4e68749a-c333-4d0d-8334-a5a221ffa1ab-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.261984 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e68749a-c333-4d0d-8334-a5a221ffa1ab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.272296 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fptnr\" (UniqueName: \"kubernetes.io/projected/4e68749a-c333-4d0d-8334-a5a221ffa1ab-kube-api-access-fptnr\") pod \"ovnkube-control-plane-97c9b6c48-q44h6\" (UID: \"4e68749a-c333-4d0d-8334-a5a221ffa1ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.280164 5116 scope.go:117] "RemoveContainer" containerID="6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.291591 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"] Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.295234 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-fl6jw"] Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.301419 5116 scope.go:117] "RemoveContainer" containerID="5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3" Dec 12 16:26:14 crc kubenswrapper[5116]: E1212 16:26:14.301971 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\": container with ID starting with 5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3 not found: ID does not exist" containerID="5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.302039 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3"} err="failed to get container status \"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\": rpc error: code = NotFound desc = could not find container \"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\": container with ID starting with 5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3 not found: ID does not exist" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.302084 5116 scope.go:117] "RemoveContainer" containerID="6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3" Dec 12 16:26:14 crc kubenswrapper[5116]: E1212 16:26:14.302870 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\": container with ID starting with 6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3 not found: ID does not exist" containerID="6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.302938 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3"} err="failed to get container status \"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\": rpc error: code = NotFound desc = could not find container 
\"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\": container with ID starting with 6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3 not found: ID does not exist" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.302977 5116 scope.go:117] "RemoveContainer" containerID="5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.303511 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3"} err="failed to get container status \"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\": rpc error: code = NotFound desc = could not find container \"5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3\": container with ID starting with 5ef556870e7c49c46c75cccab1a151653fa781dc388ceb513e9ccc03c5176ef3 not found: ID does not exist" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.303548 5116 scope.go:117] "RemoveContainer" containerID="6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.303840 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3"} err="failed to get container status \"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\": rpc error: code = NotFound desc = could not find container \"6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3\": container with ID starting with 6dfddec40c30fe6a1a4cb046a7c37b50ff875070eca646db30885c76834145e3 not found: ID does not exist" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.315603 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fg2lh_789dbc62-9a37-4521-89a5-476e80e7beb6/ovn-acl-logging/0.log" Dec 12 16:26:14 crc 
kubenswrapper[5116]: I1212 16:26:14.316527 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fg2lh_789dbc62-9a37-4521-89a5-476e80e7beb6/ovn-controller/0.log" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.317158 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354267 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-systemd\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354338 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-script-lib\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354356 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-etc-openvswitch\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354373 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-ovn-kubernetes\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354394 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-systemd-units\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354442 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-slash\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354462 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-var-lib-openvswitch\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354477 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-netd\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354503 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/789dbc62-9a37-4521-89a5-476e80e7beb6-ovn-node-metrics-cert\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354561 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgwxf\" (UniqueName: \"kubernetes.io/projected/789dbc62-9a37-4521-89a5-476e80e7beb6-kube-api-access-tgwxf\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") 
" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354585 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-openvswitch\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354655 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-node-log\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354683 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-ovn\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354701 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-config\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354728 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-log-socket\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354788 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-env-overrides\") 
pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354849 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-bin\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354897 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354962 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-kubelet\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.354989 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-netns\") pod \"789dbc62-9a37-4521-89a5-476e80e7beb6\" (UID: \"789dbc62-9a37-4521-89a5-476e80e7beb6\") " Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.355469 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.357721 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358055 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358097 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-slash" (OuterVolumeSpecName: "host-slash") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358079 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358154 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358193 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-log-socket" (OuterVolumeSpecName: "log-socket") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358227 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358212 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358278 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358255 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-node-log" (OuterVolumeSpecName: "node-log") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358326 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358779 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358841 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358875 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.358896 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.359076 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.363648 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/789dbc62-9a37-4521-89a5-476e80e7beb6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.366005 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789dbc62-9a37-4521-89a5-476e80e7beb6-kube-api-access-tgwxf" (OuterVolumeSpecName: "kube-api-access-tgwxf") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "kube-api-access-tgwxf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376165 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66szh"]
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376811 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kubecfg-setup"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376824 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kubecfg-setup"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376837 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="northd"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376843 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="northd"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376850 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="nbdb"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376856 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="nbdb"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376866 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kube-rbac-proxy-node"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376872 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kube-rbac-proxy-node"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376881 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-acl-logging"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376887 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-acl-logging"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376897 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-controller"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376904 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-controller"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376915 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="sbdb"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376920 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="sbdb"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376930 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovnkube-controller"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376935 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovnkube-controller"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376947 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kube-rbac-proxy-ovn-metrics"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.376954 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kube-rbac-proxy-ovn-metrics"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377052 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kube-rbac-proxy-ovn-metrics"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377063 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="northd"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377070 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="sbdb"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377078 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-controller"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377086 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="nbdb"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377093 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="kube-rbac-proxy-node"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377099 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovn-acl-logging"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.377125 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerName="ovnkube-controller"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.381667 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "789dbc62-9a37-4521-89a5-476e80e7beb6" (UID: "789dbc62-9a37-4521-89a5-476e80e7beb6"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.387918 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.454182 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456519 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovnkube-config\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456554 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-kubelet\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456598 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-run-netns\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456615 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-systemd-units\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456646 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456816 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-log-socket\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456881 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-cni-bin\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456920 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-ovn\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.456983 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovnkube-script-lib\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457021 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-cni-netd\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457062 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovn-node-metrics-cert\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457093 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-systemd\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457143 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-var-lib-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457206 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-slash\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457248 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-node-log\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457372 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-env-overrides\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457418 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457450 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457519 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grh2b\" (UniqueName: \"kubernetes.io/projected/44aedd49-77f2-488d-a7c3-c25b657a6b9f-kube-api-access-grh2b\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457552 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-etc-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457652 5116 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-node-log\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457679 5116 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457693 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457709 5116 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-log-socket\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457721 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457733 5116 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457748 5116 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457761 5116 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457775 5116 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457787 5116 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457799 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/789dbc62-9a37-4521-89a5-476e80e7beb6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457811 5116 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457824 5116 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457835 5116 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457848 5116 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-slash\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457860 5116 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457873 5116 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457885 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/789dbc62-9a37-4521-89a5-476e80e7beb6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457897 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgwxf\" (UniqueName: \"kubernetes.io/projected/789dbc62-9a37-4521-89a5-476e80e7beb6-kube-api-access-tgwxf\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.457910 5116 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/789dbc62-9a37-4521-89a5-476e80e7beb6-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:14 crc kubenswrapper[5116]: W1212 16:26:14.474944 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e68749a_c333_4d0d_8334_a5a221ffa1ab.slice/crio-7c12bc02f0abefe766b4c4bb72a73d29180e83e791b4f95c6497927c48def90c WatchSource:0}: Error finding container 7c12bc02f0abefe766b4c4bb72a73d29180e83e791b4f95c6497927c48def90c: Status 404 returned error can't find the container with id 7c12bc02f0abefe766b4c4bb72a73d29180e83e791b4f95c6497927c48def90c
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.559710 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-env-overrides\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.559775 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.559805 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.559901 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grh2b\" (UniqueName: \"kubernetes.io/projected/44aedd49-77f2-488d-a7c3-c25b657a6b9f-kube-api-access-grh2b\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.559952 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.559978 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-etc-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560056 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovnkube-config\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560096 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-etc-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560166 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-kubelet\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560040 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560098 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-kubelet\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560359 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-run-netns\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560389 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-systemd-units\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560417 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560482 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-log-socket\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560509 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-cni-bin\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560534 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-ovn\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560588 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovnkube-script-lib\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560605 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-env-overrides\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560616 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-cni-netd\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560625 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560653 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-cni-netd\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560673 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-run-netns\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560688 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-log-socket\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560670 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovn-node-metrics-cert\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560717 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-cni-bin\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560727 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-systemd\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560748 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-ovn\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560751 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-var-lib-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560803 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-var-lib-openvswitch\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560823 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-slash\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560826 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovnkube-config\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560870 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-run-systemd\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560871 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-node-log\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560889 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-node-log\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560718 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-systemd-units\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.560926 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44aedd49-77f2-488d-a7c3-c25b657a6b9f-host-slash\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.561481 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovnkube-script-lib\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.570058 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44aedd49-77f2-488d-a7c3-c25b657a6b9f-ovn-node-metrics-cert\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.578837 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grh2b\" (UniqueName: \"kubernetes.io/projected/44aedd49-77f2-488d-a7c3-c25b657a6b9f-kube-api-access-grh2b\") pod \"ovnkube-node-66szh\" (UID: \"44aedd49-77f2-488d-a7c3-c25b657a6b9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: I1212 16:26:14.705506 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66szh"
Dec 12 16:26:14 crc kubenswrapper[5116]: W1212 16:26:14.730024 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44aedd49_77f2_488d_a7c3_c25b657a6b9f.slice/crio-e0d8d6f1249c422537aacbda8b1c7f8c0ec8662eca1c6b50fb53a3d94678fb35 WatchSource:0}: Error finding container e0d8d6f1249c422537aacbda8b1c7f8c0ec8662eca1c6b50fb53a3d94678fb35: Status 404 returned error can't find the container with id e0d8d6f1249c422537aacbda8b1c7f8c0ec8662eca1c6b50fb53a3d94678fb35
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.248643 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" event={"ID":"4e68749a-c333-4d0d-8334-a5a221ffa1ab","Type":"ContainerStarted","Data":"0a952bce2de2524f240f745b66a847cefbfee26a1b00469d55d405c7d0e15aeb"}
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.249011 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" event={"ID":"4e68749a-c333-4d0d-8334-a5a221ffa1ab","Type":"ContainerStarted","Data":"f9de0b0f764e04fb918fba86ef115ad8505c9fa5d3af408868bbc567cf61db8e"}
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.249022 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" event={"ID":"4e68749a-c333-4d0d-8334-a5a221ffa1ab","Type":"ContainerStarted","Data":"7c12bc02f0abefe766b4c4bb72a73d29180e83e791b4f95c6497927c48def90c"}
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.252728 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fg2lh_789dbc62-9a37-4521-89a5-476e80e7beb6/ovn-acl-logging/0.log"
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253248 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fg2lh_789dbc62-9a37-4521-89a5-476e80e7beb6/ovn-controller/0.log"
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253627 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" exitCode=0
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253659 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" exitCode=0
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253669 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" exitCode=0
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253677 5116 generic.go:358] "Generic (PLEG): container finished" podID="789dbc62-9a37-4521-89a5-476e80e7beb6" containerID="84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" exitCode=0
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253766 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b"}
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253816 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a"}
Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253830 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh"
event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2"} Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253842 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550"} Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253852 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" event={"ID":"789dbc62-9a37-4521-89a5-476e80e7beb6","Type":"ContainerDied","Data":"50a67f3807b20fb39764c234f4968121e7ec8b83d8be1ff90efe3027e07e98c6"} Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253873 5116 scope.go:117] "RemoveContainer" containerID="e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.253887 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2lh" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.256668 5116 generic.go:358] "Generic (PLEG): container finished" podID="44aedd49-77f2-488d-a7c3-c25b657a6b9f" containerID="e5282b3b212f346d3ad3bfc52d06ec742ff2602282f1e64dbe1db262fde99a7a" exitCode=0 Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.256798 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerDied","Data":"e5282b3b212f346d3ad3bfc52d06ec742ff2602282f1e64dbe1db262fde99a7a"} Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.256843 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"e0d8d6f1249c422537aacbda8b1c7f8c0ec8662eca1c6b50fb53a3d94678fb35"} Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.259825 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bphkq_0e71d710-0829-4655-b88f-9318b9776228/kube-multus/0.log" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.259990 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bphkq" event={"ID":"0e71d710-0829-4655-b88f-9318b9776228","Type":"ContainerStarted","Data":"c9c1a82ea1f3fc1d87e19aaf34925bfcbf121820056337ce69bae091f3cb2c7f"} Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.270496 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-q44h6" podStartSLOduration=2.2704773830000002 podStartE2EDuration="2.270477383s" podCreationTimestamp="2025-12-12 16:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:26:15.268300915 +0000 UTC 
m=+669.732512691" watchObservedRunningTime="2025-12-12 16:26:15.270477383 +0000 UTC m=+669.734689139" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.276600 5116 scope.go:117] "RemoveContainer" containerID="6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.307091 5116 scope.go:117] "RemoveContainer" containerID="9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.345135 5116 scope.go:117] "RemoveContainer" containerID="84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.362242 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fg2lh"] Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.366187 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fg2lh"] Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.383413 5116 scope.go:117] "RemoveContainer" containerID="ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.402689 5116 scope.go:117] "RemoveContainer" containerID="c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.417284 5116 scope.go:117] "RemoveContainer" containerID="33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.433182 5116 scope.go:117] "RemoveContainer" containerID="3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.455858 5116 scope.go:117] "RemoveContainer" containerID="c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.491802 5116 scope.go:117] "RemoveContainer" 
containerID="e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.493220 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": container with ID starting with e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b not found: ID does not exist" containerID="e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.493258 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b"} err="failed to get container status \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": rpc error: code = NotFound desc = could not find container \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": container with ID starting with e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.493279 5116 scope.go:117] "RemoveContainer" containerID="6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.493538 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": container with ID starting with 6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a not found: ID does not exist" containerID="6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.493562 5116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a"} err="failed to get container status \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": rpc error: code = NotFound desc = could not find container \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": container with ID starting with 6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.493574 5116 scope.go:117] "RemoveContainer" containerID="9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.493879 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": container with ID starting with 9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2 not found: ID does not exist" containerID="9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.493899 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2"} err="failed to get container status \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": rpc error: code = NotFound desc = could not find container \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": container with ID starting with 9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.493910 5116 scope.go:117] "RemoveContainer" containerID="84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.494302 5116 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": container with ID starting with 84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550 not found: ID does not exist" containerID="84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.494324 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550"} err="failed to get container status \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": rpc error: code = NotFound desc = could not find container \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": container with ID starting with 84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.494338 5116 scope.go:117] "RemoveContainer" containerID="ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.494580 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": container with ID starting with ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161 not found: ID does not exist" containerID="ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.494596 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161"} err="failed to get container status \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": rpc error: code = NotFound desc = could not find container 
\"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": container with ID starting with ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.494608 5116 scope.go:117] "RemoveContainer" containerID="c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.494781 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": container with ID starting with c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e not found: ID does not exist" containerID="c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.494797 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e"} err="failed to get container status \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": rpc error: code = NotFound desc = could not find container \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": container with ID starting with c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.494810 5116 scope.go:117] "RemoveContainer" containerID="33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.494990 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": container with ID starting with 33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197 not found: ID does not exist" 
containerID="33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.495013 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197"} err="failed to get container status \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": rpc error: code = NotFound desc = could not find container \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": container with ID starting with 33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.495025 5116 scope.go:117] "RemoveContainer" containerID="3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.495262 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": container with ID starting with 3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc not found: ID does not exist" containerID="3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.495283 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc"} err="failed to get container status \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": rpc error: code = NotFound desc = could not find container \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": container with ID starting with 3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.495298 5116 scope.go:117] 
"RemoveContainer" containerID="c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243" Dec 12 16:26:15 crc kubenswrapper[5116]: E1212 16:26:15.495870 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": container with ID starting with c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243 not found: ID does not exist" containerID="c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.495900 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243"} err="failed to get container status \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": rpc error: code = NotFound desc = could not find container \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": container with ID starting with c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.495917 5116 scope.go:117] "RemoveContainer" containerID="e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.496177 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b"} err="failed to get container status \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": rpc error: code = NotFound desc = could not find container \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": container with ID starting with e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.496204 5116 
scope.go:117] "RemoveContainer" containerID="6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.496538 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a"} err="failed to get container status \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": rpc error: code = NotFound desc = could not find container \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": container with ID starting with 6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.496564 5116 scope.go:117] "RemoveContainer" containerID="9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.496881 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2"} err="failed to get container status \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": rpc error: code = NotFound desc = could not find container \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": container with ID starting with 9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.496909 5116 scope.go:117] "RemoveContainer" containerID="84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.497371 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550"} err="failed to get container status \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": rpc 
error: code = NotFound desc = could not find container \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": container with ID starting with 84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.497400 5116 scope.go:117] "RemoveContainer" containerID="ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.497647 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161"} err="failed to get container status \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": rpc error: code = NotFound desc = could not find container \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": container with ID starting with ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.497674 5116 scope.go:117] "RemoveContainer" containerID="c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.497887 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e"} err="failed to get container status \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": rpc error: code = NotFound desc = could not find container \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": container with ID starting with c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.497911 5116 scope.go:117] "RemoveContainer" containerID="33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" Dec 12 16:26:15 crc 
kubenswrapper[5116]: I1212 16:26:15.498177 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197"} err="failed to get container status \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": rpc error: code = NotFound desc = could not find container \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": container with ID starting with 33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.498223 5116 scope.go:117] "RemoveContainer" containerID="3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.498453 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc"} err="failed to get container status \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": rpc error: code = NotFound desc = could not find container \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": container with ID starting with 3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.498478 5116 scope.go:117] "RemoveContainer" containerID="c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.498817 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243"} err="failed to get container status \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": rpc error: code = NotFound desc = could not find container \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": container 
with ID starting with c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.498839 5116 scope.go:117] "RemoveContainer" containerID="e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499059 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b"} err="failed to get container status \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": rpc error: code = NotFound desc = could not find container \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": container with ID starting with e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499102 5116 scope.go:117] "RemoveContainer" containerID="6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499485 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a"} err="failed to get container status \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": rpc error: code = NotFound desc = could not find container \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": container with ID starting with 6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499516 5116 scope.go:117] "RemoveContainer" containerID="9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499724 5116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2"} err="failed to get container status \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": rpc error: code = NotFound desc = could not find container \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": container with ID starting with 9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499749 5116 scope.go:117] "RemoveContainer" containerID="84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499946 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550"} err="failed to get container status \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": rpc error: code = NotFound desc = could not find container \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": container with ID starting with 84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.499970 5116 scope.go:117] "RemoveContainer" containerID="ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.500324 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161"} err="failed to get container status \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": rpc error: code = NotFound desc = could not find container \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": container with ID starting with ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161 not found: ID does not 
exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.500347 5116 scope.go:117] "RemoveContainer" containerID="c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.500896 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e"} err="failed to get container status \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": rpc error: code = NotFound desc = could not find container \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": container with ID starting with c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.500922 5116 scope.go:117] "RemoveContainer" containerID="33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.501172 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197"} err="failed to get container status \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": rpc error: code = NotFound desc = could not find container \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": container with ID starting with 33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.501196 5116 scope.go:117] "RemoveContainer" containerID="3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.501391 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc"} err="failed to get container status 
\"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": rpc error: code = NotFound desc = could not find container \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": container with ID starting with 3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.501413 5116 scope.go:117] "RemoveContainer" containerID="c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.501615 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243"} err="failed to get container status \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": rpc error: code = NotFound desc = could not find container \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": container with ID starting with c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.501637 5116 scope.go:117] "RemoveContainer" containerID="e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.502253 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b"} err="failed to get container status \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": rpc error: code = NotFound desc = could not find container \"e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b\": container with ID starting with e58713539309568ebef8ab0d82ead428ee12f100516cdc80185e4dc48a273a7b not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.502283 5116 scope.go:117] "RemoveContainer" 
containerID="6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.502563 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a"} err="failed to get container status \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": rpc error: code = NotFound desc = could not find container \"6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a\": container with ID starting with 6089d8d73545ee991d3cdf31daaaa3259c8883b0babd99bc2778e6a06318b11a not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.502588 5116 scope.go:117] "RemoveContainer" containerID="9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.503021 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2"} err="failed to get container status \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": rpc error: code = NotFound desc = could not find container \"9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2\": container with ID starting with 9bd610e260df9370dece4dcb27914081d310084cdb36bf8e07b53ea56113ccf2 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.503053 5116 scope.go:117] "RemoveContainer" containerID="84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.503350 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550"} err="failed to get container status \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": rpc error: code = NotFound desc = could 
not find container \"84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550\": container with ID starting with 84ed22bbc04e65d05c4d58e97376826996fdda9a79e23f4029f4fdadca267550 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.503377 5116 scope.go:117] "RemoveContainer" containerID="ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.503603 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161"} err="failed to get container status \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": rpc error: code = NotFound desc = could not find container \"ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161\": container with ID starting with ae0bfb47ad34ee3d601e2a9efd16c2c9623299bc1b8d48e996b181094699f161 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.503627 5116 scope.go:117] "RemoveContainer" containerID="c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.503936 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e"} err="failed to get container status \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": rpc error: code = NotFound desc = could not find container \"c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e\": container with ID starting with c8a3e5300dad9e51ac718c519eff2d1170061341f4d584cf3833f7614a0b665e not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.504003 5116 scope.go:117] "RemoveContainer" containerID="33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 
16:26:15.504263 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197"} err="failed to get container status \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": rpc error: code = NotFound desc = could not find container \"33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197\": container with ID starting with 33206fb4baf0fd7a2adbeb2480ab5cf68c405732990b5f07cdf0a60f1c121197 not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.504290 5116 scope.go:117] "RemoveContainer" containerID="3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.504631 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc"} err="failed to get container status \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": rpc error: code = NotFound desc = could not find container \"3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc\": container with ID starting with 3170e131627f893a4f25e270de20184726de0f31b1c47cac07f660077adc5afc not found: ID does not exist" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.504649 5116 scope.go:117] "RemoveContainer" containerID="c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243" Dec 12 16:26:15 crc kubenswrapper[5116]: I1212 16:26:15.505006 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243"} err="failed to get container status \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": rpc error: code = NotFound desc = could not find container \"c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243\": container with ID starting with 
c223dc99994e5ab02787c9611925dfea3df6c57bdf5bb2d67ad62fb0424c9243 not found: ID does not exist" Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.054489 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3252cf25-4bc0-4262-923c-20bb5a19f1cb" path="/var/lib/kubelet/pods/3252cf25-4bc0-4262-923c-20bb5a19f1cb/volumes" Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.055805 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789dbc62-9a37-4521-89a5-476e80e7beb6" path="/var/lib/kubelet/pods/789dbc62-9a37-4521-89a5-476e80e7beb6/volumes" Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.272177 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"d350c433127b79bbcdce8159bd1ca5dcd616ab31ebbdcb6011e4b246a9b669c2"} Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.272221 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"36f526ff970b6df3ab2d288d096a329e1f6b207be4c338c1efc20d7e6b494b9e"} Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.272234 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"5516844b5107edf8c3d926aba00ad2b23118cc160802ddcafc9dd80dda323cd8"} Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.272243 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"5f630948ca3e65f47b58dff016f3385c2326faad271922be40f192850b4031ec"} Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.272252 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"33eb630baf152f0ce603ed342a950e6d6a67b016132e7701f7e29d0979e4ed66"} Dec 12 16:26:16 crc kubenswrapper[5116]: I1212 16:26:16.272265 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"a6798ad468e61eff1f45d5e8055538ab00657679507f2c12105e5d38563b3f63"} Dec 12 16:26:19 crc kubenswrapper[5116]: I1212 16:26:19.300739 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"0d569e2f11283d0b9f416a0cf44f333f44e72fb3b7a3afe50ff0b98d523e022c"} Dec 12 16:26:22 crc kubenswrapper[5116]: I1212 16:26:22.324790 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" event={"ID":"44aedd49-77f2-488d-a7c3-c25b657a6b9f","Type":"ContainerStarted","Data":"144da7c02f8f0031fedc41428490c5ce80c2067ba3646fc0ea7b8d23de565d94"} Dec 12 16:26:22 crc kubenswrapper[5116]: I1212 16:26:22.325747 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" Dec 12 16:26:22 crc kubenswrapper[5116]: I1212 16:26:22.325771 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" Dec 12 16:26:22 crc kubenswrapper[5116]: I1212 16:26:22.325783 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" Dec 12 16:26:22 crc kubenswrapper[5116]: I1212 16:26:22.355988 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" Dec 12 16:26:22 crc kubenswrapper[5116]: I1212 16:26:22.358658 5116 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" podStartSLOduration=8.358637516 podStartE2EDuration="8.358637516s" podCreationTimestamp="2025-12-12 16:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:26:22.357908446 +0000 UTC m=+676.822120202" watchObservedRunningTime="2025-12-12 16:26:22.358637516 +0000 UTC m=+676.822849272" Dec 12 16:26:22 crc kubenswrapper[5116]: I1212 16:26:22.363907 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" Dec 12 16:26:54 crc kubenswrapper[5116]: I1212 16:26:54.362760 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66szh" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.204337 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rfj8"] Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.205324 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4rfj8" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="registry-server" containerID="cri-o://bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814" gracePeriod=30 Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.569342 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rfj8" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.657435 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prcw8\" (UniqueName: \"kubernetes.io/projected/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-kube-api-access-prcw8\") pod \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.657504 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-utilities\") pod \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.657610 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-catalog-content\") pod \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\" (UID: \"9c869e6b-8812-4b02-8c2e-720bed5f6ec7\") " Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.659484 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-utilities" (OuterVolumeSpecName: "utilities") pod "9c869e6b-8812-4b02-8c2e-720bed5f6ec7" (UID: "9c869e6b-8812-4b02-8c2e-720bed5f6ec7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.669510 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-kube-api-access-prcw8" (OuterVolumeSpecName: "kube-api-access-prcw8") pod "9c869e6b-8812-4b02-8c2e-720bed5f6ec7" (UID: "9c869e6b-8812-4b02-8c2e-720bed5f6ec7"). InnerVolumeSpecName "kube-api-access-prcw8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.672152 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c869e6b-8812-4b02-8c2e-720bed5f6ec7" (UID: "9c869e6b-8812-4b02-8c2e-720bed5f6ec7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.731450 5116 generic.go:358] "Generic (PLEG): container finished" podID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerID="bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814" exitCode=0 Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.731549 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rfj8" event={"ID":"9c869e6b-8812-4b02-8c2e-720bed5f6ec7","Type":"ContainerDied","Data":"bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814"} Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.731623 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rfj8" event={"ID":"9c869e6b-8812-4b02-8c2e-720bed5f6ec7","Type":"ContainerDied","Data":"d2ac77df8df44b28270a46f6744e8fbf32b7cf07683ecffe40d92cfa900d8edb"} Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.731570 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rfj8" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.731644 5116 scope.go:117] "RemoveContainer" containerID="bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.747417 5116 scope.go:117] "RemoveContainer" containerID="54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.763477 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.763522 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-prcw8\" (UniqueName: \"kubernetes.io/projected/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-kube-api-access-prcw8\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.763535 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c869e6b-8812-4b02-8c2e-720bed5f6ec7-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.768187 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rfj8"] Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.768200 5116 scope.go:117] "RemoveContainer" containerID="92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.772133 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rfj8"] Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.785500 5116 scope.go:117] "RemoveContainer" containerID="bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814" Dec 12 16:27:21 crc kubenswrapper[5116]: E1212 
16:27:21.786044 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814\": container with ID starting with bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814 not found: ID does not exist" containerID="bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.786090 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814"} err="failed to get container status \"bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814\": rpc error: code = NotFound desc = could not find container \"bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814\": container with ID starting with bc990ba8ba04b7a724e92a4b8b912ceacad03eda069655fb7bf2e305762ed814 not found: ID does not exist" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.786145 5116 scope.go:117] "RemoveContainer" containerID="54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75" Dec 12 16:27:21 crc kubenswrapper[5116]: E1212 16:27:21.786695 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75\": container with ID starting with 54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75 not found: ID does not exist" containerID="54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.786741 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75"} err="failed to get container status \"54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75\": rpc 
error: code = NotFound desc = could not find container \"54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75\": container with ID starting with 54afb190a77bd3c656171ce1aefefd8ae4b38cb6b9d34d1495e13e78cd7cba75 not found: ID does not exist" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.786768 5116 scope.go:117] "RemoveContainer" containerID="92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c" Dec 12 16:27:21 crc kubenswrapper[5116]: E1212 16:27:21.787160 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c\": container with ID starting with 92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c not found: ID does not exist" containerID="92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c" Dec 12 16:27:21 crc kubenswrapper[5116]: I1212 16:27:21.787215 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c"} err="failed to get container status \"92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c\": rpc error: code = NotFound desc = could not find container \"92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c\": container with ID starting with 92f5466c396bd44ccba9a0021767d9815026cad5690b5ab71b211abb9871d46c not found: ID does not exist" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.060739 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" path="/var/lib/kubelet/pods/9c869e6b-8812-4b02-8c2e-720bed5f6ec7/volumes" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223216 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"] Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223832 5116 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="registry-server" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223847 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="registry-server" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223857 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="extract-utilities" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223863 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="extract-utilities" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223874 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="extract-content" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223879 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="extract-content" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.223966 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c869e6b-8812-4b02-8c2e-720bed5f6ec7" containerName="registry-server" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.234005 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.236953 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"] Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.370947 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e657138d-22ee-4db3-86be-54a42edb3805-trusted-ca\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.371002 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e657138d-22ee-4db3-86be-54a42edb3805-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.371035 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-registry-tls\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.371072 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-bound-sa-token\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 
16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.371129 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e657138d-22ee-4db3-86be-54a42edb3805-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.371162 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.371431 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664x6\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-kube-api-access-664x6\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.371540 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e657138d-22ee-4db3-86be-54a42edb3805-registry-certificates\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.394316 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.473053 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-664x6\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-kube-api-access-664x6\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.473148 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e657138d-22ee-4db3-86be-54a42edb3805-registry-certificates\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.473189 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e657138d-22ee-4db3-86be-54a42edb3805-trusted-ca\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.473523 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e657138d-22ee-4db3-86be-54a42edb3805-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.473713 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-registry-tls\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.473879 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-bound-sa-token\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.473965 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e657138d-22ee-4db3-86be-54a42edb3805-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.474662 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e657138d-22ee-4db3-86be-54a42edb3805-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.474792 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e657138d-22ee-4db3-86be-54a42edb3805-trusted-ca\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.474917 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e657138d-22ee-4db3-86be-54a42edb3805-registry-certificates\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.482184 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-registry-tls\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.482767 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e657138d-22ee-4db3-86be-54a42edb3805-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.494878 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-664x6\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-kube-api-access-664x6\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.499141 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e657138d-22ee-4db3-86be-54a42edb3805-bound-sa-token\") pod \"image-registry-5d9d95bf5b-dhxcw\" (UID: \"e657138d-22ee-4db3-86be-54a42edb3805\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.550943 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:22 crc kubenswrapper[5116]: I1212 16:27:22.979043 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"]
Dec 12 16:27:23 crc kubenswrapper[5116]: I1212 16:27:23.748036 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" event={"ID":"e657138d-22ee-4db3-86be-54a42edb3805","Type":"ContainerStarted","Data":"44103437a4fe70d6e0f779fa14f330968d260995189ce9e062e1c6975fd44247"}
Dec 12 16:27:23 crc kubenswrapper[5116]: I1212 16:27:23.750985 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:23 crc kubenswrapper[5116]: I1212 16:27:23.751152 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" event={"ID":"e657138d-22ee-4db3-86be-54a42edb3805","Type":"ContainerStarted","Data":"e867d3a7b3b3d84622a6e164808ba3d0cd4d066af963547b1f3ae792c821a685"}
Dec 12 16:27:23 crc kubenswrapper[5116]: I1212 16:27:23.773445 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw" podStartSLOduration=1.7734235790000001 podStartE2EDuration="1.773423579s" podCreationTimestamp="2025-12-12 16:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:27:23.771250021 +0000 UTC m=+738.235461787" watchObservedRunningTime="2025-12-12 16:27:23.773423579 +0000 UTC m=+738.237635335"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.153735 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"]
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.174333 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.176965 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.179326 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"]
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.227946 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.228433 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.228588 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx9g4\" (UniqueName: \"kubernetes.io/projected/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-kube-api-access-vx9g4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.330032 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.330456 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.330611 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vx9g4\" (UniqueName: \"kubernetes.io/projected/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-kube-api-access-vx9g4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.330745 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.330852 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.353810 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx9g4\" (UniqueName: \"kubernetes.io/projected/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-kube-api-access-vx9g4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.495382 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:25 crc kubenswrapper[5116]: I1212 16:27:25.996069 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"]
Dec 12 16:27:26 crc kubenswrapper[5116]: I1212 16:27:26.771059 5116 generic.go:358] "Generic (PLEG): container finished" podID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerID="39f1c27b09b95ec8a37bd353f17f3fcfb257d5b2d5d62e297eebe0cfc7b9df2e" exitCode=0
Dec 12 16:27:26 crc kubenswrapper[5116]: I1212 16:27:26.771221 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz" event={"ID":"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0","Type":"ContainerDied","Data":"39f1c27b09b95ec8a37bd353f17f3fcfb257d5b2d5d62e297eebe0cfc7b9df2e"}
Dec 12 16:27:26 crc kubenswrapper[5116]: I1212 16:27:26.771254 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz" event={"ID":"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0","Type":"ContainerStarted","Data":"c1be9c9b595898bda04049883fe518358383a635c896097e8436798b3d0df38e"}
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.087384 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8kgk"]
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.100553 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.105952 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8kgk"]
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.175618 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znt4z\" (UniqueName: \"kubernetes.io/projected/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-kube-api-access-znt4z\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.175958 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-catalog-content\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.176142 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-utilities\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.277834 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-catalog-content\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.277914 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-utilities\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.278790 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-utilities\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.278795 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-catalog-content\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.278892 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-znt4z\" (UniqueName: \"kubernetes.io/projected/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-kube-api-access-znt4z\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.298094 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-znt4z\" (UniqueName: \"kubernetes.io/projected/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-kube-api-access-znt4z\") pod \"redhat-operators-t8kgk\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.428268 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8kgk"
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.787848 5116 generic.go:358] "Generic (PLEG): container finished" podID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerID="a8b8bb541e003e8cfebdf7606b28e58198334fa89083c0ab54c1d0e17bea9b12" exitCode=0
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.788085 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz" event={"ID":"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0","Type":"ContainerDied","Data":"a8b8bb541e003e8cfebdf7606b28e58198334fa89083c0ab54c1d0e17bea9b12"}
Dec 12 16:27:28 crc kubenswrapper[5116]: I1212 16:27:28.941278 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8kgk"]
Dec 12 16:27:28 crc kubenswrapper[5116]: W1212 16:27:28.952753 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a9c0026_8cfc_46a9_b1e4_c8153c66815b.slice/crio-b02e830cad12614decc5dcd91328cad7cc7be083856e04070529fc7948c2a112 WatchSource:0}: Error finding container b02e830cad12614decc5dcd91328cad7cc7be083856e04070529fc7948c2a112: Status 404 returned error can't find the container with id b02e830cad12614decc5dcd91328cad7cc7be083856e04070529fc7948c2a112
Dec 12 16:27:29 crc kubenswrapper[5116]: I1212 16:27:29.799080 5116 generic.go:358] "Generic (PLEG): container finished" podID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerID="5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c" exitCode=0
Dec 12 16:27:29 crc kubenswrapper[5116]: I1212 16:27:29.799184 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8kgk" event={"ID":"8a9c0026-8cfc-46a9-b1e4-c8153c66815b","Type":"ContainerDied","Data":"5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c"}
Dec 12 16:27:29 crc kubenswrapper[5116]: I1212 16:27:29.799818 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8kgk" event={"ID":"8a9c0026-8cfc-46a9-b1e4-c8153c66815b","Type":"ContainerStarted","Data":"b02e830cad12614decc5dcd91328cad7cc7be083856e04070529fc7948c2a112"}
Dec 12 16:27:29 crc kubenswrapper[5116]: I1212 16:27:29.803606 5116 generic.go:358] "Generic (PLEG): container finished" podID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerID="0ba31afcb76cfbb750d81ce2089be308e0be8619fb89c6b1d43b011de55f5e9f" exitCode=0
Dec 12 16:27:29 crc kubenswrapper[5116]: I1212 16:27:29.803712 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz" event={"ID":"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0","Type":"ContainerDied","Data":"0ba31afcb76cfbb750d81ce2089be308e0be8619fb89c6b1d43b011de55f5e9f"}
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.163634 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.341845 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-util\") pod \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") "
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.341914 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-bundle\") pod \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") "
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.342021 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx9g4\" (UniqueName: \"kubernetes.io/projected/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-kube-api-access-vx9g4\") pod \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\" (UID: \"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0\") "
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.344704 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-bundle" (OuterVolumeSpecName: "bundle") pod "937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" (UID: "937c4b6c-c1a0-4b30-879f-8adfeed2ecb0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.354300 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-util" (OuterVolumeSpecName: "util") pod "937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" (UID: "937c4b6c-c1a0-4b30-879f-8adfeed2ecb0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.355314 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-kube-api-access-vx9g4" (OuterVolumeSpecName: "kube-api-access-vx9g4") pod "937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" (UID: "937c4b6c-c1a0-4b30-879f-8adfeed2ecb0"). InnerVolumeSpecName "kube-api-access-vx9g4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.444029 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vx9g4\" (UniqueName: \"kubernetes.io/projected/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-kube-api-access-vx9g4\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.444071 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-util\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.444081 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/937c4b6c-c1a0-4b30-879f-8adfeed2ecb0-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.820191 5116 generic.go:358] "Generic (PLEG): container finished" podID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerID="b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c" exitCode=0
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.820306 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8kgk" event={"ID":"8a9c0026-8cfc-46a9-b1e4-c8153c66815b","Type":"ContainerDied","Data":"b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c"}
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.825046 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz" event={"ID":"937c4b6c-c1a0-4b30-879f-8adfeed2ecb0","Type":"ContainerDied","Data":"c1be9c9b595898bda04049883fe518358383a635c896097e8436798b3d0df38e"}
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.825068 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210cr7dz"
Dec 12 16:27:31 crc kubenswrapper[5116]: I1212 16:27:31.825089 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1be9c9b595898bda04049883fe518358383a635c896097e8436798b3d0df38e"
Dec 12 16:27:32 crc kubenswrapper[5116]: I1212 16:27:32.835422 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8kgk" event={"ID":"8a9c0026-8cfc-46a9-b1e4-c8153c66815b","Type":"ContainerStarted","Data":"540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1"}
Dec 12 16:27:32 crc kubenswrapper[5116]: I1212 16:27:32.859911 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t8kgk" podStartSLOduration=3.933802305 podStartE2EDuration="4.859878733s" podCreationTimestamp="2025-12-12 16:27:28 +0000 UTC" firstStartedPulling="2025-12-12 16:27:29.801877333 +0000 UTC m=+744.266089089" lastFinishedPulling="2025-12-12 16:27:30.727953721 +0000 UTC m=+745.192165517" observedRunningTime="2025-12-12 16:27:32.854252691 +0000 UTC m=+747.318464447" watchObservedRunningTime="2025-12-12 16:27:32.859878733 +0000 UTC m=+747.324090529"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.683293 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kl4p"]
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.684156 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerName="pull"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.684174 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerName="pull"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.684194 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerName="util"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.684200 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerName="util"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.684216 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerName="extract"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.684222 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerName="extract"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.684329 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="937c4b6c-c1a0-4b30-879f-8adfeed2ecb0" containerName="extract"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.715453 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kl4p"]
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.715719 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.896902 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-catalog-content\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.896965 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-utilities\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.896989 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrxr\" (UniqueName: \"kubernetes.io/projected/712b5f0c-8943-4bc2-950c-4e310091dd69-kube-api-access-tgrxr\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.998916 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-catalog-content\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.998994 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-utilities\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.999032 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrxr\" (UniqueName: \"kubernetes.io/projected/712b5f0c-8943-4bc2-950c-4e310091dd69-kube-api-access-tgrxr\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.999836 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-catalog-content\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:34 crc kubenswrapper[5116]: I1212 16:27:34.999837 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-utilities\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.027093 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrxr\" (UniqueName: \"kubernetes.io/projected/712b5f0c-8943-4bc2-950c-4e310091dd69-kube-api-access-tgrxr\") pod \"community-operators-5kl4p\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") " pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.034643 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.346773 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"]
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.363820 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"]
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.363983 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.366445 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.375681 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kl4p"]
Dec 12 16:27:35 crc kubenswrapper[5116]: W1212 16:27:35.383176 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod712b5f0c_8943_4bc2_950c_4e310091dd69.slice/crio-76219c8ae3f8257e40200bcc4cd4c017e6c86eec22853299e6245d5ccf77e7b6 WatchSource:0}: Error finding container 76219c8ae3f8257e40200bcc4cd4c017e6c86eec22853299e6245d5ccf77e7b6: Status 404 returned error can't find the container with id 76219c8ae3f8257e40200bcc4cd4c017e6c86eec22853299e6245d5ccf77e7b6
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.405506 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5bc\" (UniqueName: \"kubernetes.io/projected/6f0fbdb8-16c0-45d1-b80e-801c79936e20-kube-api-access-7d5bc\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.405577 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.405717 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.507213 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.506588 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.507381 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5bc\" (UniqueName: \"kubernetes.io/projected/6f0fbdb8-16c0-45d1-b80e-801c79936e20-kube-api-access-7d5bc\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.508379 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.508733 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.531174 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5bc\" (UniqueName: \"kubernetes.io/projected/6f0fbdb8-16c0-45d1-b80e-801c79936e20-kube-api-access-7d5bc\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.705562 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.862925 5116 generic.go:358] "Generic (PLEG): container finished" podID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerID="f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617" exitCode=0
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.863181 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kl4p" event={"ID":"712b5f0c-8943-4bc2-950c-4e310091dd69","Type":"ContainerDied","Data":"f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617"}
Dec 12 16:27:35 crc kubenswrapper[5116]: I1212 16:27:35.863221 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kl4p" event={"ID":"712b5f0c-8943-4bc2-950c-4e310091dd69","Type":"ContainerStarted","Data":"76219c8ae3f8257e40200bcc4cd4c017e6c86eec22853299e6245d5ccf77e7b6"}
Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.144866 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"]
Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.287083 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mtx6q"]
Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.341794 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtx6q"]
Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.342022 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.420992 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgc8c\" (UniqueName: \"kubernetes.io/projected/8e50ab73-10a0-4247-b988-df4972e93e0c-kube-api-access-vgc8c\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.421047 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-catalog-content\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.421086 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-utilities\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.522362 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vgc8c\" (UniqueName: \"kubernetes.io/projected/8e50ab73-10a0-4247-b988-df4972e93e0c-kube-api-access-vgc8c\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.522464 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-catalog-content\") pod 
\"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.522540 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-utilities\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.523266 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-utilities\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.524038 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-catalog-content\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.547509 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgc8c\" (UniqueName: \"kubernetes.io/projected/8e50ab73-10a0-4247-b988-df4972e93e0c-kube-api-access-vgc8c\") pod \"certified-operators-mtx6q\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") " pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.657260 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.871332 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" event={"ID":"6f0fbdb8-16c0-45d1-b80e-801c79936e20","Type":"ContainerStarted","Data":"e63cbaf8313682bfbbe173ed0a612d8d962fff36d2de36d7a521c9f251c8f445"} Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.871392 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" event={"ID":"6f0fbdb8-16c0-45d1-b80e-801c79936e20","Type":"ContainerStarted","Data":"0948b855c6c5353e0f4de956a883a6ce03e21447fe246b2f49dabf54be3fa22e"} Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.944421 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4"] Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.950887 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:36 crc kubenswrapper[5116]: I1212 16:27:36.956758 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4"] Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.030304 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4tq9\" (UniqueName: \"kubernetes.io/projected/67605f64-4d7c-4434-a20a-746c4d62b504-kube-api-access-k4tq9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.030652 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.030702 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.131481 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4tq9\" (UniqueName: 
\"kubernetes.io/projected/67605f64-4d7c-4434-a20a-746c4d62b504-kube-api-access-k4tq9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.131539 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.131597 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.132000 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.132028 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: 
\"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.169595 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4tq9\" (UniqueName: \"kubernetes.io/projected/67605f64-4d7c-4434-a20a-746c4d62b504-kube-api-access-k4tq9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.303202 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.312120 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtx6q"] Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.791429 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4"] Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.884881 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx6q" event={"ID":"8e50ab73-10a0-4247-b988-df4972e93e0c","Type":"ContainerStarted","Data":"552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b"} Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.884938 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx6q" event={"ID":"8e50ab73-10a0-4247-b988-df4972e93e0c","Type":"ContainerStarted","Data":"06bd4829fad2045398a6aaec48d70303131f6365ecfe6425cef0fbd87b434b3c"} Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.886241 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" event={"ID":"67605f64-4d7c-4434-a20a-746c4d62b504","Type":"ContainerStarted","Data":"dbcbda270d6fdf78e96804f2303897279ae23a71a62b5e1987c7cc1833e8c878"} Dec 12 16:27:37 crc kubenswrapper[5116]: I1212 16:27:37.888493 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kl4p" event={"ID":"712b5f0c-8943-4bc2-950c-4e310091dd69","Type":"ContainerStarted","Data":"930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e"} Dec 12 16:27:38 crc kubenswrapper[5116]: I1212 16:27:38.429224 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8kgk" Dec 12 16:27:38 crc kubenswrapper[5116]: I1212 16:27:38.430392 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-t8kgk" Dec 12 16:27:38 crc kubenswrapper[5116]: I1212 16:27:38.485911 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t8kgk" Dec 12 16:27:38 crc kubenswrapper[5116]: I1212 16:27:38.954202 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t8kgk" Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.880079 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8kgk"] Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.911002 5116 generic.go:358] "Generic (PLEG): container finished" podID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerID="552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b" exitCode=0 Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.911093 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx6q" 
event={"ID":"8e50ab73-10a0-4247-b988-df4972e93e0c","Type":"ContainerDied","Data":"552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b"} Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.913184 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" event={"ID":"67605f64-4d7c-4434-a20a-746c4d62b504","Type":"ContainerStarted","Data":"043c7e25caefb77beb2ee70114d7b2b231087d3b023ce729e4dd129da0198712"} Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.917462 5116 generic.go:358] "Generic (PLEG): container finished" podID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerID="930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e" exitCode=0 Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.917531 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kl4p" event={"ID":"712b5f0c-8943-4bc2-950c-4e310091dd69","Type":"ContainerDied","Data":"930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e"} Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.919581 5116 generic.go:358] "Generic (PLEG): container finished" podID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerID="e63cbaf8313682bfbbe173ed0a612d8d962fff36d2de36d7a521c9f251c8f445" exitCode=0 Dec 12 16:27:39 crc kubenswrapper[5116]: I1212 16:27:39.919642 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" event={"ID":"6f0fbdb8-16c0-45d1-b80e-801c79936e20","Type":"ContainerDied","Data":"e63cbaf8313682bfbbe173ed0a612d8d962fff36d2de36d7a521c9f251c8f445"} Dec 12 16:27:40 crc kubenswrapper[5116]: I1212 16:27:40.930284 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kl4p" 
event={"ID":"712b5f0c-8943-4bc2-950c-4e310091dd69","Type":"ContainerStarted","Data":"6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e"} Dec 12 16:27:40 crc kubenswrapper[5116]: I1212 16:27:40.932350 5116 generic.go:358] "Generic (PLEG): container finished" podID="67605f64-4d7c-4434-a20a-746c4d62b504" containerID="043c7e25caefb77beb2ee70114d7b2b231087d3b023ce729e4dd129da0198712" exitCode=0 Dec 12 16:27:40 crc kubenswrapper[5116]: I1212 16:27:40.932661 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8kgk" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="registry-server" containerID="cri-o://540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1" gracePeriod=2 Dec 12 16:27:40 crc kubenswrapper[5116]: I1212 16:27:40.933178 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" event={"ID":"67605f64-4d7c-4434-a20a-746c4d62b504","Type":"ContainerDied","Data":"043c7e25caefb77beb2ee70114d7b2b231087d3b023ce729e4dd129da0198712"} Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.102232 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5kl4p" podStartSLOduration=6.005248814 podStartE2EDuration="7.102209475s" podCreationTimestamp="2025-12-12 16:27:34 +0000 UTC" firstStartedPulling="2025-12-12 16:27:35.865325277 +0000 UTC m=+750.329537033" lastFinishedPulling="2025-12-12 16:27:36.962285938 +0000 UTC m=+751.426497694" observedRunningTime="2025-12-12 16:27:41.099935233 +0000 UTC m=+755.564146989" watchObservedRunningTime="2025-12-12 16:27:41.102209475 +0000 UTC m=+755.566421241" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.527774 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8kgk" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.624233 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-utilities\") pod \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.624330 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znt4z\" (UniqueName: \"kubernetes.io/projected/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-kube-api-access-znt4z\") pod \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.624370 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-catalog-content\") pod \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\" (UID: \"8a9c0026-8cfc-46a9-b1e4-c8153c66815b\") " Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.625212 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-utilities" (OuterVolumeSpecName: "utilities") pod "8a9c0026-8cfc-46a9-b1e4-c8153c66815b" (UID: "8a9c0026-8cfc-46a9-b1e4-c8153c66815b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.635322 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-kube-api-access-znt4z" (OuterVolumeSpecName: "kube-api-access-znt4z") pod "8a9c0026-8cfc-46a9-b1e4-c8153c66815b" (UID: "8a9c0026-8cfc-46a9-b1e4-c8153c66815b"). InnerVolumeSpecName "kube-api-access-znt4z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.728415 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.728463 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-znt4z\" (UniqueName: \"kubernetes.io/projected/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-kube-api-access-znt4z\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.760173 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a9c0026-8cfc-46a9-b1e4-c8153c66815b" (UID: "8a9c0026-8cfc-46a9-b1e4-c8153c66815b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.830204 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9c0026-8cfc-46a9-b1e4-c8153c66815b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.944686 5116 generic.go:358] "Generic (PLEG): container finished" podID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerID="540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1" exitCode=0 Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.944812 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8kgk" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.944836 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8kgk" event={"ID":"8a9c0026-8cfc-46a9-b1e4-c8153c66815b","Type":"ContainerDied","Data":"540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1"} Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.944890 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8kgk" event={"ID":"8a9c0026-8cfc-46a9-b1e4-c8153c66815b","Type":"ContainerDied","Data":"b02e830cad12614decc5dcd91328cad7cc7be083856e04070529fc7948c2a112"} Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.944923 5116 scope.go:117] "RemoveContainer" containerID="540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.948606 5116 generic.go:358] "Generic (PLEG): container finished" podID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerID="c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a" exitCode=0 Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.948681 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx6q" event={"ID":"8e50ab73-10a0-4247-b988-df4972e93e0c","Type":"ContainerDied","Data":"c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a"} Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.961504 5116 generic.go:358] "Generic (PLEG): container finished" podID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerID="8aee03d5b0963d54bd5c96477305fb36e9431f6f06e2ef914173f7a48be1df87" exitCode=0 Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.961631 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" 
event={"ID":"6f0fbdb8-16c0-45d1-b80e-801c79936e20","Type":"ContainerDied","Data":"8aee03d5b0963d54bd5c96477305fb36e9431f6f06e2ef914173f7a48be1df87"} Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.987322 5116 scope.go:117] "RemoveContainer" containerID="b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c" Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.988356 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8kgk"] Dec 12 16:27:41 crc kubenswrapper[5116]: I1212 16:27:41.998518 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t8kgk"] Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.009572 5116 scope.go:117] "RemoveContainer" containerID="5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c" Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.051679 5116 scope.go:117] "RemoveContainer" containerID="540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1" Dec 12 16:27:42 crc kubenswrapper[5116]: E1212 16:27:42.058488 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1\": container with ID starting with 540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1 not found: ID does not exist" containerID="540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1" Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.058540 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1"} err="failed to get container status \"540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1\": rpc error: code = NotFound desc = could not find container \"540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1\": container with ID starting with 
540c8de97f6eae486266d1c8c62348a5d259924c49d669bdb93aa7c44db68ce1 not found: ID does not exist" Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.058569 5116 scope.go:117] "RemoveContainer" containerID="b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c" Dec 12 16:27:42 crc kubenswrapper[5116]: E1212 16:27:42.059949 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c\": container with ID starting with b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c not found: ID does not exist" containerID="b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c" Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.060010 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c"} err="failed to get container status \"b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c\": rpc error: code = NotFound desc = could not find container \"b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c\": container with ID starting with b0777c56dd0926b200b3664433d330cf6e04a30b39c02d71963ed714a4cf564c not found: ID does not exist" Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.060057 5116 scope.go:117] "RemoveContainer" containerID="5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c" Dec 12 16:27:42 crc kubenswrapper[5116]: E1212 16:27:42.060337 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c\": container with ID starting with 5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c not found: ID does not exist" containerID="5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c" Dec 12 16:27:42 crc 
kubenswrapper[5116]: I1212 16:27:42.060359 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c"} err="failed to get container status \"5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c\": rpc error: code = NotFound desc = could not find container \"5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c\": container with ID starting with 5ad63b130d22c26c2dccd9a3e02be925b14c528fe9287153d2173093d8c1f90c not found: ID does not exist"
Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.060751 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" path="/var/lib/kubelet/pods/8a9c0026-8cfc-46a9-b1e4-c8153c66815b/volumes"
Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.975309 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" event={"ID":"6f0fbdb8-16c0-45d1-b80e-801c79936e20","Type":"ContainerStarted","Data":"1a9d2d3c8a0241167c160b83cc366001438c6b5f2a5511294b084c4ecb2337a3"}
Dec 12 16:27:42 crc kubenswrapper[5116]: I1212 16:27:42.982894 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx6q" event={"ID":"8e50ab73-10a0-4247-b988-df4972e93e0c","Type":"ContainerStarted","Data":"331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576"}
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.009133 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" podStartSLOduration=7.012067916 podStartE2EDuration="8.009093675s" podCreationTimestamp="2025-12-12 16:27:35 +0000 UTC" firstStartedPulling="2025-12-12 16:27:39.921679653 +0000 UTC m=+754.385891409" lastFinishedPulling="2025-12-12 16:27:40.918705412 +0000 UTC m=+755.382917168" observedRunningTime="2025-12-12 16:27:43.001117201 +0000 UTC m=+757.465328977" watchObservedRunningTime="2025-12-12 16:27:43.009093675 +0000 UTC m=+757.473305431"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.035195 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mtx6q" podStartSLOduration=6.028052257 podStartE2EDuration="7.035170868s" podCreationTimestamp="2025-12-12 16:27:36 +0000 UTC" firstStartedPulling="2025-12-12 16:27:39.912144567 +0000 UTC m=+754.376356313" lastFinishedPulling="2025-12-12 16:27:40.919263168 +0000 UTC m=+755.383474924" observedRunningTime="2025-12-12 16:27:43.031880299 +0000 UTC m=+757.496092085" watchObservedRunningTime="2025-12-12 16:27:43.035170868 +0000 UTC m=+757.499382624"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.216405 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.217312 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="extract-content"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.217342 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="extract-content"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.217361 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="registry-server"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.217367 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="registry-server"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.217403 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="extract-utilities"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.217410 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="extract-utilities"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.217516 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="8a9c0026-8cfc-46a9-b1e4-c8153c66815b" containerName="registry-server"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.226792 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.229368 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-kppqn\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.230261 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.237590 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.237807 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.266255 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bts9b\" (UniqueName: \"kubernetes.io/projected/8f0ebfba-04ab-41e7-980f-4a98169dd9bd-kube-api-access-bts9b\") pod \"obo-prometheus-operator-86648f486b-wxlr2\" (UID: \"8f0ebfba-04ab-41e7-980f-4a98169dd9bd\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.367939 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bts9b\" (UniqueName: \"kubernetes.io/projected/8f0ebfba-04ab-41e7-980f-4a98169dd9bd-kube-api-access-bts9b\") pod \"obo-prometheus-operator-86648f486b-wxlr2\" (UID: \"8f0ebfba-04ab-41e7-980f-4a98169dd9bd\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.369247 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.373972 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.380435 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.380618 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-9g7qd\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.382445 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.387661 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.390990 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.424212 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bts9b\" (UniqueName: \"kubernetes.io/projected/8f0ebfba-04ab-41e7-980f-4a98169dd9bd-kube-api-access-bts9b\") pod \"obo-prometheus-operator-86648f486b-wxlr2\" (UID: \"8f0ebfba-04ab-41e7-980f-4a98169dd9bd\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.428655 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.471973 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9530b301-aff0-413f-bd3d-f28b9627d579-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s\" (UID: \"9530b301-aff0-413f-bd3d-f28b9627d579\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.472043 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e65cc37-804d-4ecd-ab43-1fc8d7455d6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr\" (UID: \"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.472186 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9530b301-aff0-413f-bd3d-f28b9627d579-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s\" (UID: \"9530b301-aff0-413f-bd3d-f28b9627d579\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.472235 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e65cc37-804d-4ecd-ab43-1fc8d7455d6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr\" (UID: \"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.562978 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-2d948"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.566272 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.573312 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9530b301-aff0-413f-bd3d-f28b9627d579-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s\" (UID: \"9530b301-aff0-413f-bd3d-f28b9627d579\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.573375 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e65cc37-804d-4ecd-ab43-1fc8d7455d6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr\" (UID: \"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.573438 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9530b301-aff0-413f-bd3d-f28b9627d579-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s\" (UID: \"9530b301-aff0-413f-bd3d-f28b9627d579\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.573465 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e65cc37-804d-4ecd-ab43-1fc8d7455d6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr\" (UID: \"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.582729 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e65cc37-804d-4ecd-ab43-1fc8d7455d6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr\" (UID: \"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.583663 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9530b301-aff0-413f-bd3d-f28b9627d579-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s\" (UID: \"9530b301-aff0-413f-bd3d-f28b9627d579\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.585412 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9530b301-aff0-413f-bd3d-f28b9627d579-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s\" (UID: \"9530b301-aff0-413f-bd3d-f28b9627d579\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.588690 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e65cc37-804d-4ecd-ab43-1fc8d7455d6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr\" (UID: \"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.589134 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.592970 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.593253 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-77hc6\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.618440 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-2d948"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.675403 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g826\" (UniqueName: \"kubernetes.io/projected/8c823477-9c7a-4b69-8d4e-271f230fd395-kube-api-access-7g826\") pod \"observability-operator-78c97476f4-2d948\" (UID: \"8c823477-9c7a-4b69-8d4e-271f230fd395\") " pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.675541 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c823477-9c7a-4b69-8d4e-271f230fd395-observability-operator-tls\") pod \"observability-operator-78c97476f4-2d948\" (UID: \"8c823477-9c7a-4b69-8d4e-271f230fd395\") " pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.694681 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.710957 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.758338 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cq7l4"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.776072 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.778290 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c823477-9c7a-4b69-8d4e-271f230fd395-observability-operator-tls\") pod \"observability-operator-78c97476f4-2d948\" (UID: \"8c823477-9c7a-4b69-8d4e-271f230fd395\") " pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.778362 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7g826\" (UniqueName: \"kubernetes.io/projected/8c823477-9c7a-4b69-8d4e-271f230fd395-kube-api-access-7g826\") pod \"observability-operator-78c97476f4-2d948\" (UID: \"8c823477-9c7a-4b69-8d4e-271f230fd395\") " pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.779433 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-7ng6r\""
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.781027 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cq7l4"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.789214 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c823477-9c7a-4b69-8d4e-271f230fd395-observability-operator-tls\") pod \"observability-operator-78c97476f4-2d948\" (UID: \"8c823477-9c7a-4b69-8d4e-271f230fd395\") " pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.802567 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g826\" (UniqueName: \"kubernetes.io/projected/8c823477-9c7a-4b69-8d4e-271f230fd395-kube-api-access-7g826\") pod \"observability-operator-78c97476f4-2d948\" (UID: \"8c823477-9c7a-4b69-8d4e-271f230fd395\") " pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.880148 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hxs\" (UniqueName: \"kubernetes.io/projected/2d588fc5-0499-4dfa-a30d-b2bdb266131c-kube-api-access-k2hxs\") pod \"perses-operator-68bdb49cbf-cq7l4\" (UID: \"2d588fc5-0499-4dfa-a30d-b2bdb266131c\") " pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.880222 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d588fc5-0499-4dfa-a30d-b2bdb266131c-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cq7l4\" (UID: \"2d588fc5-0499-4dfa-a30d-b2bdb266131c\") " pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.927813 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wxlr2"]
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.971123 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-2d948"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.981403 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k2hxs\" (UniqueName: \"kubernetes.io/projected/2d588fc5-0499-4dfa-a30d-b2bdb266131c-kube-api-access-k2hxs\") pod \"perses-operator-68bdb49cbf-cq7l4\" (UID: \"2d588fc5-0499-4dfa-a30d-b2bdb266131c\") " pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.981481 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d588fc5-0499-4dfa-a30d-b2bdb266131c-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cq7l4\" (UID: \"2d588fc5-0499-4dfa-a30d-b2bdb266131c\") " pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:43 crc kubenswrapper[5116]: I1212 16:27:43.988900 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d588fc5-0499-4dfa-a30d-b2bdb266131c-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cq7l4\" (UID: \"2d588fc5-0499-4dfa-a30d-b2bdb266131c\") " pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:44 crc kubenswrapper[5116]: I1212 16:27:44.001277 5116 generic.go:358] "Generic (PLEG): container finished" podID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerID="1a9d2d3c8a0241167c160b83cc366001438c6b5f2a5511294b084c4ecb2337a3" exitCode=0
Dec 12 16:27:44 crc kubenswrapper[5116]: I1212 16:27:44.001393 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" event={"ID":"6f0fbdb8-16c0-45d1-b80e-801c79936e20","Type":"ContainerDied","Data":"1a9d2d3c8a0241167c160b83cc366001438c6b5f2a5511294b084c4ecb2337a3"}
Dec 12 16:27:44 crc kubenswrapper[5116]: I1212 16:27:44.002624 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2hxs\" (UniqueName: \"kubernetes.io/projected/2d588fc5-0499-4dfa-a30d-b2bdb266131c-kube-api-access-k2hxs\") pod \"perses-operator-68bdb49cbf-cq7l4\" (UID: \"2d588fc5-0499-4dfa-a30d-b2bdb266131c\") " pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:44 crc kubenswrapper[5116]: I1212 16:27:44.129571 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:27:45 crc kubenswrapper[5116]: I1212 16:27:45.035562 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:45 crc kubenswrapper[5116]: I1212 16:27:45.037487 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:45 crc kubenswrapper[5116]: I1212 16:27:45.091605 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:45 crc kubenswrapper[5116]: I1212 16:27:45.769743 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-dhxcw"
Dec 12 16:27:45 crc kubenswrapper[5116]: I1212 16:27:45.862707 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qgtsr"]
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.073596 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.551619 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.626919 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-bundle\") pod \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") "
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.626977 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-util\") pod \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") "
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.627092 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d5bc\" (UniqueName: \"kubernetes.io/projected/6f0fbdb8-16c0-45d1-b80e-801c79936e20-kube-api-access-7d5bc\") pod \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\" (UID: \"6f0fbdb8-16c0-45d1-b80e-801c79936e20\") "
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.627983 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-bundle" (OuterVolumeSpecName: "bundle") pod "6f0fbdb8-16c0-45d1-b80e-801c79936e20" (UID: "6f0fbdb8-16c0-45d1-b80e-801c79936e20"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.646310 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0fbdb8-16c0-45d1-b80e-801c79936e20-kube-api-access-7d5bc" (OuterVolumeSpecName: "kube-api-access-7d5bc") pod "6f0fbdb8-16c0-45d1-b80e-801c79936e20" (UID: "6f0fbdb8-16c0-45d1-b80e-801c79936e20"). InnerVolumeSpecName "kube-api-access-7d5bc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.646915 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-util" (OuterVolumeSpecName: "util") pod "6f0fbdb8-16c0-45d1-b80e-801c79936e20" (UID: "6f0fbdb8-16c0-45d1-b80e-801c79936e20"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.662639 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mtx6q"
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.662688 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mtx6q"
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.729084 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7d5bc\" (UniqueName: \"kubernetes.io/projected/6f0fbdb8-16c0-45d1-b80e-801c79936e20-kube-api-access-7d5bc\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.729158 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.729168 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f0fbdb8-16c0-45d1-b80e-801c79936e20-util\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:46 crc kubenswrapper[5116]: I1212 16:27:46.780131 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mtx6q"
Dec 12 16:27:47 crc kubenswrapper[5116]: I1212 16:27:47.021473 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2" event={"ID":"8f0ebfba-04ab-41e7-980f-4a98169dd9bd","Type":"ContainerStarted","Data":"e196690f5880cd30eb73e9a1dac35049efb8c0b2c16f5c872dac4bbd6de6424d"}
Dec 12 16:27:47 crc kubenswrapper[5116]: I1212 16:27:47.024577 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42"
Dec 12 16:27:47 crc kubenswrapper[5116]: I1212 16:27:47.024577 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ev6m42" event={"ID":"6f0fbdb8-16c0-45d1-b80e-801c79936e20","Type":"ContainerDied","Data":"0948b855c6c5353e0f4de956a883a6ce03e21447fe246b2f49dabf54be3fa22e"}
Dec 12 16:27:47 crc kubenswrapper[5116]: I1212 16:27:47.024633 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0948b855c6c5353e0f4de956a883a6ce03e21447fe246b2f49dabf54be3fa22e"
Dec 12 16:27:47 crc kubenswrapper[5116]: I1212 16:27:47.081882 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mtx6q"
Dec 12 16:27:47 crc kubenswrapper[5116]: I1212 16:27:47.783052 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s"]
Dec 12 16:27:47 crc kubenswrapper[5116]: I1212 16:27:47.920037 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr"]
Dec 12 16:27:48 crc kubenswrapper[5116]: I1212 16:27:48.040319 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s" event={"ID":"9530b301-aff0-413f-bd3d-f28b9627d579","Type":"ContainerStarted","Data":"8ae8d5a7d418651fd80e96eac4a6df86ba82d54e8132420402a9e3301c802a27"}
Dec 12 16:27:48 crc kubenswrapper[5116]: I1212 16:27:48.043267 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr" event={"ID":"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e","Type":"ContainerStarted","Data":"d64031ba4603520548c8d6c106d6d9d2275f1517fa4edf704c3e6d87c6ca3f76"}
Dec 12 16:27:48 crc kubenswrapper[5116]: I1212 16:27:48.056717 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cq7l4"]
Dec 12 16:27:48 crc kubenswrapper[5116]: I1212 16:27:48.084335 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-2d948"]
Dec 12 16:27:48 crc kubenswrapper[5116]: W1212 16:27:48.115539 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c823477_9c7a_4b69_8d4e_271f230fd395.slice/crio-16a011b32b27a6165c925c069ae53bafc6d78e0f5c57cc53af97bd1607cf9168 WatchSource:0}: Error finding container 16a011b32b27a6165c925c069ae53bafc6d78e0f5c57cc53af97bd1607cf9168: Status 404 returned error can't find the container with id 16a011b32b27a6165c925c069ae53bafc6d78e0f5c57cc53af97bd1607cf9168
Dec 12 16:27:48 crc kubenswrapper[5116]: I1212 16:27:48.473926 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kl4p"]
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.066149 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-2d948" event={"ID":"8c823477-9c7a-4b69-8d4e-271f230fd395","Type":"ContainerStarted","Data":"16a011b32b27a6165c925c069ae53bafc6d78e0f5c57cc53af97bd1607cf9168"}
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.071758 5116 generic.go:358] "Generic (PLEG): container finished" podID="67605f64-4d7c-4434-a20a-746c4d62b504" containerID="95d250712c77194afa99a34e9bdeb69b90506e9a02eff8fb15e682fda4c51ab7" exitCode=0
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.071884 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" event={"ID":"67605f64-4d7c-4434-a20a-746c4d62b504","Type":"ContainerDied","Data":"95d250712c77194afa99a34e9bdeb69b90506e9a02eff8fb15e682fda4c51ab7"}
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.075454 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtx6q"]
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.088459 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mtx6q" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="registry-server" containerID="cri-o://331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576" gracePeriod=2
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.088753 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4" event={"ID":"2d588fc5-0499-4dfa-a30d-b2bdb266131c","Type":"ContainerStarted","Data":"512e9a2e3ce57c0874c7b11ebd519cc20b8a3d80f8b317ed7844ace767f72f07"}
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.088908 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5kl4p" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="registry-server" containerID="cri-o://6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e" gracePeriod=2
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.716542 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kl4p"
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.811889 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgrxr\" (UniqueName: \"kubernetes.io/projected/712b5f0c-8943-4bc2-950c-4e310091dd69-kube-api-access-tgrxr\") pod \"712b5f0c-8943-4bc2-950c-4e310091dd69\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") "
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.812034 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-catalog-content\") pod \"712b5f0c-8943-4bc2-950c-4e310091dd69\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") "
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.812331 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-utilities\") pod \"712b5f0c-8943-4bc2-950c-4e310091dd69\" (UID: \"712b5f0c-8943-4bc2-950c-4e310091dd69\") "
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.819213 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-utilities" (OuterVolumeSpecName: "utilities") pod "712b5f0c-8943-4bc2-950c-4e310091dd69" (UID: "712b5f0c-8943-4bc2-950c-4e310091dd69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.836751 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712b5f0c-8943-4bc2-950c-4e310091dd69-kube-api-access-tgrxr" (OuterVolumeSpecName: "kube-api-access-tgrxr") pod "712b5f0c-8943-4bc2-950c-4e310091dd69" (UID: "712b5f0c-8943-4bc2-950c-4e310091dd69"). InnerVolumeSpecName "kube-api-access-tgrxr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.879276 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "712b5f0c-8943-4bc2-950c-4e310091dd69" (UID: "712b5f0c-8943-4bc2-950c-4e310091dd69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.914362 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.914393 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgrxr\" (UniqueName: \"kubernetes.io/projected/712b5f0c-8943-4bc2-950c-4e310091dd69-kube-api-access-tgrxr\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.914401 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712b5f0c-8943-4bc2-950c-4e310091dd69-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:49 crc kubenswrapper[5116]: I1212 16:27:49.932482 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtx6q"
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.015944 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgc8c\" (UniqueName: \"kubernetes.io/projected/8e50ab73-10a0-4247-b988-df4972e93e0c-kube-api-access-vgc8c\") pod \"8e50ab73-10a0-4247-b988-df4972e93e0c\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") "
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.016451 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-utilities\") pod \"8e50ab73-10a0-4247-b988-df4972e93e0c\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") "
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.016550 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-catalog-content\") pod \"8e50ab73-10a0-4247-b988-df4972e93e0c\" (UID: \"8e50ab73-10a0-4247-b988-df4972e93e0c\") "
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.018190 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-utilities" (OuterVolumeSpecName: "utilities") pod "8e50ab73-10a0-4247-b988-df4972e93e0c" (UID: "8e50ab73-10a0-4247-b988-df4972e93e0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.024849 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e50ab73-10a0-4247-b988-df4972e93e0c-kube-api-access-vgc8c" (OuterVolumeSpecName: "kube-api-access-vgc8c") pod "8e50ab73-10a0-4247-b988-df4972e93e0c" (UID: "8e50ab73-10a0-4247-b988-df4972e93e0c"). InnerVolumeSpecName "kube-api-access-vgc8c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.063763 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e50ab73-10a0-4247-b988-df4972e93e0c" (UID: "8e50ab73-10a0-4247-b988-df4972e93e0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.112698 5116 generic.go:358] "Generic (PLEG): container finished" podID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerID="6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e" exitCode=0
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.112952 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kl4p" event={"ID":"712b5f0c-8943-4bc2-950c-4e310091dd69","Type":"ContainerDied","Data":"6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e"}
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.112988 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kl4p" event={"ID":"712b5f0c-8943-4bc2-950c-4e310091dd69","Type":"ContainerDied","Data":"76219c8ae3f8257e40200bcc4cd4c017e6c86eec22853299e6245d5ccf77e7b6"}
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.113007 5116 scope.go:117] "RemoveContainer" containerID="6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e"
Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.113245 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kl4p" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.117895 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.117940 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e50ab73-10a0-4247-b988-df4972e93e0c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.117958 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vgc8c\" (UniqueName: \"kubernetes.io/projected/8e50ab73-10a0-4247-b988-df4972e93e0c-kube-api-access-vgc8c\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.145169 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kl4p"] Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.148473 5116 generic.go:358] "Generic (PLEG): container finished" podID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerID="331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576" exitCode=0 Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.148557 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx6q" event={"ID":"8e50ab73-10a0-4247-b988-df4972e93e0c","Type":"ContainerDied","Data":"331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576"} Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.148617 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx6q" event={"ID":"8e50ab73-10a0-4247-b988-df4972e93e0c","Type":"ContainerDied","Data":"06bd4829fad2045398a6aaec48d70303131f6365ecfe6425cef0fbd87b434b3c"} Dec 12 16:27:50 crc 
kubenswrapper[5116]: I1212 16:27:50.148729 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtx6q" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.160936 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kl4p"] Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.169130 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" event={"ID":"67605f64-4d7c-4434-a20a-746c4d62b504","Type":"ContainerStarted","Data":"9978dbf18b130cd3f391a03bfbb3fa3c98f6b3c0c14eac109ca6fdb2cd68ba60"} Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.175313 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtx6q"] Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.189173 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mtx6q"] Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.203605 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" podStartSLOduration=7.741356846 podStartE2EDuration="14.20358291s" podCreationTimestamp="2025-12-12 16:27:36 +0000 UTC" firstStartedPulling="2025-12-12 16:27:40.934823877 +0000 UTC m=+755.399035633" lastFinishedPulling="2025-12-12 16:27:47.397049941 +0000 UTC m=+761.861261697" observedRunningTime="2025-12-12 16:27:50.202370487 +0000 UTC m=+764.666582263" watchObservedRunningTime="2025-12-12 16:27:50.20358291 +0000 UTC m=+764.667794666" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.332402 5116 scope.go:117] "RemoveContainer" containerID="930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.398703 5116 scope.go:117] 
"RemoveContainer" containerID="f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.468715 5116 scope.go:117] "RemoveContainer" containerID="6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e" Dec 12 16:27:50 crc kubenswrapper[5116]: E1212 16:27:50.469774 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e\": container with ID starting with 6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e not found: ID does not exist" containerID="6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.469841 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e"} err="failed to get container status \"6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e\": rpc error: code = NotFound desc = could not find container \"6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e\": container with ID starting with 6181d7afd751500bf379ea6a980c9466fd54eadc47ab4f6d45720fdf2166890e not found: ID does not exist" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.469871 5116 scope.go:117] "RemoveContainer" containerID="930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e" Dec 12 16:27:50 crc kubenswrapper[5116]: E1212 16:27:50.470672 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e\": container with ID starting with 930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e not found: ID does not exist" containerID="930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e" Dec 12 16:27:50 crc 
kubenswrapper[5116]: I1212 16:27:50.470720 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e"} err="failed to get container status \"930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e\": rpc error: code = NotFound desc = could not find container \"930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e\": container with ID starting with 930cdee73148dae39755e5c1059def58c4f7e58c761d0f80d177ff2f7b663f6e not found: ID does not exist" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.470745 5116 scope.go:117] "RemoveContainer" containerID="f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617" Dec 12 16:27:50 crc kubenswrapper[5116]: E1212 16:27:50.471357 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617\": container with ID starting with f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617 not found: ID does not exist" containerID="f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.471386 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617"} err="failed to get container status \"f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617\": rpc error: code = NotFound desc = could not find container \"f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617\": container with ID starting with f18aaf767bae678fb338a015e2737e7022c37575a25773fd46f0b3d056aa2617 not found: ID does not exist" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.471415 5116 scope.go:117] "RemoveContainer" containerID="331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576" Dec 12 
16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.519424 5116 scope.go:117] "RemoveContainer" containerID="c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.604362 5116 scope.go:117] "RemoveContainer" containerID="552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.671098 5116 scope.go:117] "RemoveContainer" containerID="331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576" Dec 12 16:27:50 crc kubenswrapper[5116]: E1212 16:27:50.672451 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576\": container with ID starting with 331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576 not found: ID does not exist" containerID="331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.672491 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576"} err="failed to get container status \"331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576\": rpc error: code = NotFound desc = could not find container \"331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576\": container with ID starting with 331c9c5e937edb1e13150de5cb8e121205013ae6fae89124796cb9569cdd2576 not found: ID does not exist" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.672522 5116 scope.go:117] "RemoveContainer" containerID="c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a" Dec 12 16:27:50 crc kubenswrapper[5116]: E1212 16:27:50.672874 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a\": container with ID starting with c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a not found: ID does not exist" containerID="c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.672904 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a"} err="failed to get container status \"c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a\": rpc error: code = NotFound desc = could not find container \"c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a\": container with ID starting with c9824edecbc85bdf14658d951f0524667be8333f6577f23be32eee912e05562a not found: ID does not exist" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.672917 5116 scope.go:117] "RemoveContainer" containerID="552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b" Dec 12 16:27:50 crc kubenswrapper[5116]: E1212 16:27:50.673605 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b\": container with ID starting with 552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b not found: ID does not exist" containerID="552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.673627 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b"} err="failed to get container status \"552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b\": rpc error: code = NotFound desc = could not find container \"552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b\": container with ID 
starting with 552726d0f108720790c128132a52776da0235a92293fabcd15a9061cf6d9782b not found: ID does not exist" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925027 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-77f86474bc-v8cjx"] Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925657 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="extract-content" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925675 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="extract-content" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925686 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerName="extract" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925693 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerName="extract" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925702 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerName="util" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925709 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerName="util" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925727 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="extract-utilities" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925733 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="extract-utilities" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925740 5116 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="extract-utilities" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925746 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="extract-utilities" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925756 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerName="pull" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925761 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerName="pull" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925772 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="registry-server" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925780 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="registry-server" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925793 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="extract-content" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925799 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="extract-content" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925807 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="registry-server" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925813 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="registry-server" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925899 5116 
memory_manager.go:356] "RemoveStaleState removing state" podUID="6f0fbdb8-16c0-45d1-b80e-801c79936e20" containerName="extract" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925911 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" containerName="registry-server" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.925919 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" containerName="registry-server" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.938325 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.944812 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-479tj\"" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.944856 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.945184 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.947199 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-77f86474bc-v8cjx"] Dec 12 16:27:50 crc kubenswrapper[5116]: I1212 16:27:50.951037 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.061825 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a39b66be-c54e-4cc6-8153-05470d194bcc-apiservice-cert\") pod 
\"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.061927 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a39b66be-c54e-4cc6-8153-05470d194bcc-webhook-cert\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.062657 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/a39b66be-c54e-4cc6-8153-05470d194bcc-kube-api-access-f69cb\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.166923 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a39b66be-c54e-4cc6-8153-05470d194bcc-apiservice-cert\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.166977 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a39b66be-c54e-4cc6-8153-05470d194bcc-webhook-cert\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.167040 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f69cb\" (UniqueName: 
\"kubernetes.io/projected/a39b66be-c54e-4cc6-8153-05470d194bcc-kube-api-access-f69cb\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.197686 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a39b66be-c54e-4cc6-8153-05470d194bcc-webhook-cert\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.198214 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a39b66be-c54e-4cc6-8153-05470d194bcc-apiservice-cert\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.201788 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/a39b66be-c54e-4cc6-8153-05470d194bcc-kube-api-access-f69cb\") pod \"elastic-operator-77f86474bc-v8cjx\" (UID: \"a39b66be-c54e-4cc6-8153-05470d194bcc\") " pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.238710 5116 generic.go:358] "Generic (PLEG): container finished" podID="67605f64-4d7c-4434-a20a-746c4d62b504" containerID="9978dbf18b130cd3f391a03bfbb3fa3c98f6b3c0c14eac109ca6fdb2cd68ba60" exitCode=0 Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.239028 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" 
event={"ID":"67605f64-4d7c-4434-a20a-746c4d62b504","Type":"ContainerDied","Data":"9978dbf18b130cd3f391a03bfbb3fa3c98f6b3c0c14eac109ca6fdb2cd68ba60"} Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.268603 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" Dec 12 16:27:51 crc kubenswrapper[5116]: I1212 16:27:51.612920 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-77f86474bc-v8cjx"] Dec 12 16:27:51 crc kubenswrapper[5116]: W1212 16:27:51.639719 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda39b66be_c54e_4cc6_8153_05470d194bcc.slice/crio-1cac8998d906d9582725449ffb25e7a1373460c16c04e812a1b7014a3641b078 WatchSource:0}: Error finding container 1cac8998d906d9582725449ffb25e7a1373460c16c04e812a1b7014a3641b078: Status 404 returned error can't find the container with id 1cac8998d906d9582725449ffb25e7a1373460c16c04e812a1b7014a3641b078 Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.054795 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712b5f0c-8943-4bc2-950c-4e310091dd69" path="/var/lib/kubelet/pods/712b5f0c-8943-4bc2-950c-4e310091dd69/volumes" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.055693 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e50ab73-10a0-4247-b988-df4972e93e0c" path="/var/lib/kubelet/pods/8e50ab73-10a0-4247-b988-df4972e93e0c/volumes" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.267175 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" event={"ID":"a39b66be-c54e-4cc6-8153-05470d194bcc","Type":"ContainerStarted","Data":"1cac8998d906d9582725449ffb25e7a1373460c16c04e812a1b7014a3641b078"} Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.611261 5116 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.711926 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4tq9\" (UniqueName: \"kubernetes.io/projected/67605f64-4d7c-4434-a20a-746c4d62b504-kube-api-access-k4tq9\") pod \"67605f64-4d7c-4434-a20a-746c4d62b504\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.712039 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-bundle\") pod \"67605f64-4d7c-4434-a20a-746c4d62b504\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.712145 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-util\") pod \"67605f64-4d7c-4434-a20a-746c4d62b504\" (UID: \"67605f64-4d7c-4434-a20a-746c4d62b504\") " Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.714560 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-bundle" (OuterVolumeSpecName: "bundle") pod "67605f64-4d7c-4434-a20a-746c4d62b504" (UID: "67605f64-4d7c-4434-a20a-746c4d62b504"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.728562 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-util" (OuterVolumeSpecName: "util") pod "67605f64-4d7c-4434-a20a-746c4d62b504" (UID: "67605f64-4d7c-4434-a20a-746c4d62b504"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.733063 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67605f64-4d7c-4434-a20a-746c4d62b504-kube-api-access-k4tq9" (OuterVolumeSpecName: "kube-api-access-k4tq9") pod "67605f64-4d7c-4434-a20a-746c4d62b504" (UID: "67605f64-4d7c-4434-a20a-746c4d62b504"). InnerVolumeSpecName "kube-api-access-k4tq9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.813731 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.813802 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4tq9\" (UniqueName: \"kubernetes.io/projected/67605f64-4d7c-4434-a20a-746c4d62b504-kube-api-access-k4tq9\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:52 crc kubenswrapper[5116]: I1212 16:27:52.813819 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67605f64-4d7c-4434-a20a-746c4d62b504-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:53 crc kubenswrapper[5116]: I1212 16:27:53.300737 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" event={"ID":"67605f64-4d7c-4434-a20a-746c4d62b504","Type":"ContainerDied","Data":"dbcbda270d6fdf78e96804f2303897279ae23a71a62b5e1987c7cc1833e8c878"} Dec 12 16:27:53 crc kubenswrapper[5116]: I1212 16:27:53.300802 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbcbda270d6fdf78e96804f2303897279ae23a71a62b5e1987c7cc1833e8c878" Dec 12 16:27:53 crc kubenswrapper[5116]: I1212 16:27:53.300936 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5cvj4" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.723301 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67"] Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.724785 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67605f64-4d7c-4434-a20a-746c4d62b504" containerName="util" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.724814 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="67605f64-4d7c-4434-a20a-746c4d62b504" containerName="util" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.724841 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67605f64-4d7c-4434-a20a-746c4d62b504" containerName="extract" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.724847 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="67605f64-4d7c-4434-a20a-746c4d62b504" containerName="extract" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.724856 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67605f64-4d7c-4434-a20a-746c4d62b504" containerName="pull" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.724862 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="67605f64-4d7c-4434-a20a-746c4d62b504" containerName="pull" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.724961 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="67605f64-4d7c-4434-a20a-746c4d62b504" containerName="extract" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.777621 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67"] Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.777850 5116 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.780822 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.780866 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.781685 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-dhc5j\"" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.812890 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c69dbda6-ffe8-4379-bdf9-ba12363dccfa-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vbx67\" (UID: \"c69dbda6-ffe8-4379-bdf9-ba12363dccfa\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.812933 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgdt\" (UniqueName: \"kubernetes.io/projected/c69dbda6-ffe8-4379-bdf9-ba12363dccfa-kube-api-access-4vgdt\") pod \"cert-manager-operator-controller-manager-64c74584c4-vbx67\" (UID: \"c69dbda6-ffe8-4379-bdf9-ba12363dccfa\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.915059 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c69dbda6-ffe8-4379-bdf9-ba12363dccfa-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vbx67\" (UID: 
\"c69dbda6-ffe8-4379-bdf9-ba12363dccfa\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.915205 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4vgdt\" (UniqueName: \"kubernetes.io/projected/c69dbda6-ffe8-4379-bdf9-ba12363dccfa-kube-api-access-4vgdt\") pod \"cert-manager-operator-controller-manager-64c74584c4-vbx67\" (UID: \"c69dbda6-ffe8-4379-bdf9-ba12363dccfa\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.915692 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c69dbda6-ffe8-4379-bdf9-ba12363dccfa-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vbx67\" (UID: \"c69dbda6-ffe8-4379-bdf9-ba12363dccfa\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:00 crc kubenswrapper[5116]: I1212 16:28:00.941504 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vgdt\" (UniqueName: \"kubernetes.io/projected/c69dbda6-ffe8-4379-bdf9-ba12363dccfa-kube-api-access-4vgdt\") pod \"cert-manager-operator-controller-manager-64c74584c4-vbx67\" (UID: \"c69dbda6-ffe8-4379-bdf9-ba12363dccfa\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:01 crc kubenswrapper[5116]: I1212 16:28:01.102910 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" Dec 12 16:28:10 crc kubenswrapper[5116]: I1212 16:28:10.655416 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67"] Dec 12 16:28:10 crc kubenswrapper[5116]: W1212 16:28:10.752967 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69dbda6_ffe8_4379_bdf9_ba12363dccfa.slice/crio-60984664d940621722dc2fedcbe6b470e0d3c8f03bec10cffd66770efa6be688 WatchSource:0}: Error finding container 60984664d940621722dc2fedcbe6b470e0d3c8f03bec10cffd66770efa6be688: Status 404 returned error can't find the container with id 60984664d940621722dc2fedcbe6b470e0d3c8f03bec10cffd66770efa6be688 Dec 12 16:28:10 crc kubenswrapper[5116]: I1212 16:28:10.964139 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" podUID="2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" containerName="registry" containerID="cri-o://0c2caf6abc336b18b322ed6df1b8a7863ead2bd60c45aff9f520c34d4d8b569e" gracePeriod=30 Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.508945 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s" event={"ID":"9530b301-aff0-413f-bd3d-f28b9627d579","Type":"ContainerStarted","Data":"cbe0520b8e8573cc976eca12b4de2a7ac72bad489824b4895dbac7956d41a453"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.511796 5116 generic.go:358] "Generic (PLEG): container finished" podID="2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" containerID="0c2caf6abc336b18b322ed6df1b8a7863ead2bd60c45aff9f520c34d4d8b569e" exitCode=0 Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.511926 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" event={"ID":"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6","Type":"ContainerDied","Data":"0c2caf6abc336b18b322ed6df1b8a7863ead2bd60c45aff9f520c34d4d8b569e"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.514759 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr" event={"ID":"5e65cc37-804d-4ecd-ab43-1fc8d7455d6e","Type":"ContainerStarted","Data":"16a270ed83f3e43de6d3acd9ca9d27d8ce20aed3ed21cac1416dae99ced2ea02"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.516130 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" event={"ID":"c69dbda6-ffe8-4379-bdf9-ba12363dccfa","Type":"ContainerStarted","Data":"60984664d940621722dc2fedcbe6b470e0d3c8f03bec10cffd66770efa6be688"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.517864 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-2d948" event={"ID":"8c823477-9c7a-4b69-8d4e-271f230fd395","Type":"ContainerStarted","Data":"873e7ef8e7aa04e864cd5c75cd88241a71c2b569bf9bb93b663785e2c6146b0d"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.518156 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-2d948" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.519965 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" event={"ID":"a39b66be-c54e-4cc6-8153-05470d194bcc","Type":"ContainerStarted","Data":"cbc4463213a9df5a4952470e856710bfd57ab418afd9f006d176a6f54101e5ed"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.521238 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2" 
event={"ID":"8f0ebfba-04ab-41e7-980f-4a98169dd9bd","Type":"ContainerStarted","Data":"e66d1fe993deab8b47b636c2aab94d01f759904c173f3cacf3a7410aad2a7aeb"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.523197 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4" event={"ID":"2d588fc5-0499-4dfa-a30d-b2bdb266131c","Type":"ContainerStarted","Data":"b8ee5ea2b3aefb85239c281594037453f547317946d1d9ac3c86acee96647acc"} Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.523478 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.529912 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-vl48s" podStartSLOduration=5.864395364 podStartE2EDuration="28.529893082s" podCreationTimestamp="2025-12-12 16:27:43 +0000 UTC" firstStartedPulling="2025-12-12 16:27:47.843857123 +0000 UTC m=+762.308068879" lastFinishedPulling="2025-12-12 16:28:10.509354841 +0000 UTC m=+784.973566597" observedRunningTime="2025-12-12 16:28:11.527345604 +0000 UTC m=+785.991557370" watchObservedRunningTime="2025-12-12 16:28:11.529893082 +0000 UTC m=+785.994104848" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.596314 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4" podStartSLOduration=6.242583244 podStartE2EDuration="28.596292185s" podCreationTimestamp="2025-12-12 16:27:43 +0000 UTC" firstStartedPulling="2025-12-12 16:27:48.102687813 +0000 UTC m=+762.566899569" lastFinishedPulling="2025-12-12 16:28:10.456396764 +0000 UTC m=+784.920608510" observedRunningTime="2025-12-12 16:28:11.590910789 +0000 UTC m=+786.055122545" watchObservedRunningTime="2025-12-12 16:28:11.596292185 +0000 UTC m=+786.060503941" Dec 12 16:28:11 
crc kubenswrapper[5116]: I1212 16:28:11.639185 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-wxlr2" podStartSLOduration=4.581027592 podStartE2EDuration="28.639161168s" podCreationTimestamp="2025-12-12 16:27:43 +0000 UTC" firstStartedPulling="2025-12-12 16:27:46.427645135 +0000 UTC m=+760.891856891" lastFinishedPulling="2025-12-12 16:28:10.485778711 +0000 UTC m=+784.949990467" observedRunningTime="2025-12-12 16:28:11.632388475 +0000 UTC m=+786.096600251" watchObservedRunningTime="2025-12-12 16:28:11.639161168 +0000 UTC m=+786.103372914" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.697662 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f5f4b586f-fxsfr" podStartSLOduration=6.194218424 podStartE2EDuration="28.697644526s" podCreationTimestamp="2025-12-12 16:27:43 +0000 UTC" firstStartedPulling="2025-12-12 16:27:47.95217003 +0000 UTC m=+762.416381786" lastFinishedPulling="2025-12-12 16:28:10.455596132 +0000 UTC m=+784.919807888" observedRunningTime="2025-12-12 16:28:11.692777364 +0000 UTC m=+786.156989120" watchObservedRunningTime="2025-12-12 16:28:11.697644526 +0000 UTC m=+786.161856282" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.698135 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.705599 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-77f86474bc-v8cjx" podStartSLOduration=2.8951453799999998 podStartE2EDuration="21.705562541s" podCreationTimestamp="2025-12-12 16:27:50 +0000 UTC" firstStartedPulling="2025-12-12 16:27:51.644259076 +0000 UTC m=+766.108470832" lastFinishedPulling="2025-12-12 16:28:10.454676237 +0000 UTC m=+784.918887993" observedRunningTime="2025-12-12 16:28:11.666672575 +0000 UTC m=+786.130884331" watchObservedRunningTime="2025-12-12 16:28:11.705562541 +0000 UTC m=+786.169774297" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.723150 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-2d948" podStartSLOduration=6.304862562 podStartE2EDuration="28.723126707s" podCreationTimestamp="2025-12-12 16:27:43 +0000 UTC" firstStartedPulling="2025-12-12 16:27:48.11888403 +0000 UTC m=+762.583095786" lastFinishedPulling="2025-12-12 16:28:10.537148175 +0000 UTC m=+785.001359931" observedRunningTime="2025-12-12 16:28:11.722328896 +0000 UTC m=+786.186540662" watchObservedRunningTime="2025-12-12 16:28:11.723126707 +0000 UTC m=+786.187338463" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.799280 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.799402 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-certificates\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.799454 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-trusted-ca\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.799497 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-ca-trust-extracted\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.799532 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-installation-pull-secrets\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.799602 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-bound-sa-token\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.799670 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-tls\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc 
kubenswrapper[5116]: I1212 16:28:11.799720 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccbbz\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-kube-api-access-ccbbz\") pod \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\" (UID: \"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6\") " Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.803770 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.803963 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.814411 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.814549 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-kube-api-access-ccbbz" (OuterVolumeSpecName: "kube-api-access-ccbbz") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "kube-api-access-ccbbz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.815338 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.827299 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.829817 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.845786 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-2d948" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.854750 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" (UID: "2ae8845e-6a5e-42ea-b73d-5a99d3d897d6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.901750 5116 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.901791 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.901803 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.901813 5116 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.901822 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.901831 5116 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:11 crc kubenswrapper[5116]: I1212 16:28:11.901841 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccbbz\" (UniqueName: \"kubernetes.io/projected/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6-kube-api-access-ccbbz\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.267483 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.268308 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" containerName="registry" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.268329 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" containerName="registry" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.268454 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" containerName="registry" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.286188 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.290428 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.290740 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.296888 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.297090 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.297160 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.297325 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.300510 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.306829 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.307162 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-b26jv\"" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.318446 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423174 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423428 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423449 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423492 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/05664d1f-9395-41af-a2d6-8669944a9ad6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423516 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: 
\"kubernetes.io/configmap/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423534 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423638 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423729 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423783 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423943 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.423977 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.424060 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.424123 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.424180 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.424234 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526064 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526223 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526260 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526292 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/05664d1f-9395-41af-a2d6-8669944a9ad6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526373 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526401 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526439 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526474 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526509 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526575 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526607 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526652 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526703 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526753 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526792 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.526796 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.527205 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.527753 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.527966 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.528851 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.528941 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.530055 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.531210 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.532826 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.533827 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.533894 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.536587 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.536572 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qgtsr" event={"ID":"2ae8845e-6a5e-42ea-b73d-5a99d3d897d6","Type":"ContainerDied","Data":"8161b51cde5fc591ec8869c9b43e6efcfc72ee321185aeee4ee9c8e7e0473927"}
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.536795 5116 scope.go:117] "RemoveContainer" containerID="0c2caf6abc336b18b322ed6df1b8a7863ead2bd60c45aff9f520c34d4d8b569e"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.539961 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.551480 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.555888 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/05664d1f-9395-41af-a2d6-8669944a9ad6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.558045 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/05664d1f-9395-41af-a2d6-8669944a9ad6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"05664d1f-9395-41af-a2d6-8669944a9ad6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.598926 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qgtsr"]
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.604047 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qgtsr"]
Dec 12 16:28:12 crc kubenswrapper[5116]: I1212 16:28:12.609538 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:13 crc kubenswrapper[5116]: I1212 16:28:13.140321 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:28:13 crc kubenswrapper[5116]: W1212 16:28:13.166792 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05664d1f_9395_41af_a2d6_8669944a9ad6.slice/crio-c7a58bfda9a1b30eabf5ebaa0ab33028984642a690b438b99fb503bc71160451 WatchSource:0}: Error finding container c7a58bfda9a1b30eabf5ebaa0ab33028984642a690b438b99fb503bc71160451: Status 404 returned error can't find the container with id c7a58bfda9a1b30eabf5ebaa0ab33028984642a690b438b99fb503bc71160451
Dec 12 16:28:13 crc kubenswrapper[5116]: I1212 16:28:13.551184 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"05664d1f-9395-41af-a2d6-8669944a9ad6","Type":"ContainerStarted","Data":"c7a58bfda9a1b30eabf5ebaa0ab33028984642a690b438b99fb503bc71160451"}
Dec 12 16:28:14 crc kubenswrapper[5116]: I1212 16:28:14.060000 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae8845e-6a5e-42ea-b73d-5a99d3d897d6" path="/var/lib/kubelet/pods/2ae8845e-6a5e-42ea-b73d-5a99d3d897d6/volumes"
Dec 12 16:28:19 crc kubenswrapper[5116]: I1212 16:28:19.415817 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:28:19 crc kubenswrapper[5116]: I1212 16:28:19.416813 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:28:19 crc kubenswrapper[5116]: I1212 16:28:19.606435 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" event={"ID":"c69dbda6-ffe8-4379-bdf9-ba12363dccfa","Type":"ContainerStarted","Data":"5bb21981dc0f09a23b6fbc8f4a009432ecb1f3dfdd074ea4146caa9b759a23d6"}
Dec 12 16:28:19 crc kubenswrapper[5116]: I1212 16:28:19.633414 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vbx67" podStartSLOduration=14.181604091 podStartE2EDuration="19.633387894s" podCreationTimestamp="2025-12-12 16:28:00 +0000 UTC" firstStartedPulling="2025-12-12 16:28:10.756951952 +0000 UTC m=+785.221163708" lastFinishedPulling="2025-12-12 16:28:16.208735755 +0000 UTC m=+790.672947511" observedRunningTime="2025-12-12 16:28:19.629310673 +0000 UTC m=+794.093522429" watchObservedRunningTime="2025-12-12 16:28:19.633387894 +0000 UTC m=+794.097599650"
Dec 12 16:28:22 crc kubenswrapper[5116]: I1212 16:28:22.542199 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-cq7l4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.017123 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"]
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.029910 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"]
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.030152 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.032906 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.036410 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.039942 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-zhf4h\""
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.074452 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf4pz\" (UniqueName: \"kubernetes.io/projected/c4a090f1-f562-4cca-b5a0-58b80b137fdd-kube-api-access-zf4pz\") pod \"cert-manager-webhook-7894b5b9b4-lfbl2\" (UID: \"c4a090f1-f562-4cca-b5a0-58b80b137fdd\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.075242 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4a090f1-f562-4cca-b5a0-58b80b137fdd-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-lfbl2\" (UID: \"c4a090f1-f562-4cca-b5a0-58b80b137fdd\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.184065 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf4pz\" (UniqueName: \"kubernetes.io/projected/c4a090f1-f562-4cca-b5a0-58b80b137fdd-kube-api-access-zf4pz\") pod \"cert-manager-webhook-7894b5b9b4-lfbl2\" (UID: \"c4a090f1-f562-4cca-b5a0-58b80b137fdd\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.184178 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4a090f1-f562-4cca-b5a0-58b80b137fdd-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-lfbl2\" (UID: \"c4a090f1-f562-4cca-b5a0-58b80b137fdd\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.213369 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4a090f1-f562-4cca-b5a0-58b80b137fdd-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-lfbl2\" (UID: \"c4a090f1-f562-4cca-b5a0-58b80b137fdd\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.219580 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf4pz\" (UniqueName: \"kubernetes.io/projected/c4a090f1-f562-4cca-b5a0-58b80b137fdd-kube-api-access-zf4pz\") pod \"cert-manager-webhook-7894b5b9b4-lfbl2\" (UID: \"c4a090f1-f562-4cca-b5a0-58b80b137fdd\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.230433 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"]
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.238729 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.241396 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-jddtr\""
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.255350 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"]
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.288096 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84696c12-24e8-4763-86da-29848fbc94e7-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-dr8r4\" (UID: \"84696c12-24e8-4763-86da-29848fbc94e7\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.288971 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cl4z\" (UniqueName: \"kubernetes.io/projected/84696c12-24e8-4763-86da-29848fbc94e7-kube-api-access-9cl4z\") pod \"cert-manager-cainjector-7dbf76d5c8-dr8r4\" (UID: \"84696c12-24e8-4763-86da-29848fbc94e7\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.380598 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.390177 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84696c12-24e8-4763-86da-29848fbc94e7-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-dr8r4\" (UID: \"84696c12-24e8-4763-86da-29848fbc94e7\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.390247 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9cl4z\" (UniqueName: \"kubernetes.io/projected/84696c12-24e8-4763-86da-29848fbc94e7-kube-api-access-9cl4z\") pod \"cert-manager-cainjector-7dbf76d5c8-dr8r4\" (UID: \"84696c12-24e8-4763-86da-29848fbc94e7\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.413183 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cl4z\" (UniqueName: \"kubernetes.io/projected/84696c12-24e8-4763-86da-29848fbc94e7-kube-api-access-9cl4z\") pod \"cert-manager-cainjector-7dbf76d5c8-dr8r4\" (UID: \"84696c12-24e8-4763-86da-29848fbc94e7\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.437332 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84696c12-24e8-4763-86da-29848fbc94e7-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-dr8r4\" (UID: \"84696c12-24e8-4763-86da-29848fbc94e7\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:25 crc kubenswrapper[5116]: I1212 16:28:25.581981 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"
Dec 12 16:28:31 crc kubenswrapper[5116]: I1212 16:28:31.368498 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-c4wrf"]
Dec 12 16:28:40 crc kubenswrapper[5116]: I1212 16:28:40.677442 5116 patch_prober.go:28] interesting pod/authentication-operator-7f5c659b84-xzscf container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 16:28:40 crc kubenswrapper[5116]: I1212 16:28:40.678015 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-xzscf" podUID="753e4cbf-dd62-4448-ab39-6f28a23c7ca2" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.128054 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-c4wrf"]
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.128393 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-4r9mm"]
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.128887 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.132766 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-crph2\""
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.254259 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlks\" (UniqueName: \"kubernetes.io/projected/4cc493a8-d453-41d1-966e-71d2dc866012-kube-api-access-ghlks\") pod \"cert-manager-858d87f86b-c4wrf\" (UID: \"4cc493a8-d453-41d1-966e-71d2dc866012\") " pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.256786 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cc493a8-d453-41d1-966e-71d2dc866012-bound-sa-token\") pod \"cert-manager-858d87f86b-c4wrf\" (UID: \"4cc493a8-d453-41d1-966e-71d2dc866012\") " pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.360035 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cc493a8-d453-41d1-966e-71d2dc866012-bound-sa-token\") pod \"cert-manager-858d87f86b-c4wrf\" (UID: \"4cc493a8-d453-41d1-966e-71d2dc866012\") " pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.360955 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghlks\" (UniqueName: \"kubernetes.io/projected/4cc493a8-d453-41d1-966e-71d2dc866012-kube-api-access-ghlks\") pod \"cert-manager-858d87f86b-c4wrf\" (UID: \"4cc493a8-d453-41d1-966e-71d2dc866012\") " pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.398272 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cc493a8-d453-41d1-966e-71d2dc866012-bound-sa-token\") pod \"cert-manager-858d87f86b-c4wrf\" (UID: \"4cc493a8-d453-41d1-966e-71d2dc866012\") " pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.401909 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlks\" (UniqueName: \"kubernetes.io/projected/4cc493a8-d453-41d1-966e-71d2dc866012-kube-api-access-ghlks\") pod \"cert-manager-858d87f86b-c4wrf\" (UID: \"4cc493a8-d453-41d1-966e-71d2dc866012\") " pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:46 crc kubenswrapper[5116]: I1212 16:28:46.465436 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-c4wrf"
Dec 12 16:28:47 crc kubenswrapper[5116]: I1212 16:28:47.956160 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-4r9mm"
Dec 12 16:28:47 crc kubenswrapper[5116]: I1212 16:28:47.960434 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-pf562\""
Dec 12 16:28:47 crc kubenswrapper[5116]: I1212 16:28:47.973513 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4r9mm"]
Dec 12 16:28:48 crc kubenswrapper[5116]: I1212 16:28:48.089615 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rzk\" (UniqueName: \"kubernetes.io/projected/267737e4-3ae9-430f-8ba6-cee92ddb2e57-kube-api-access-x9rzk\") pod \"infrawatch-operators-4r9mm\" (UID: \"267737e4-3ae9-430f-8ba6-cee92ddb2e57\") " pod="service-telemetry/infrawatch-operators-4r9mm"
Dec 12 16:28:48 crc kubenswrapper[5116]: I1212 16:28:48.192000 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rzk\" (UniqueName: \"kubernetes.io/projected/267737e4-3ae9-430f-8ba6-cee92ddb2e57-kube-api-access-x9rzk\") pod \"infrawatch-operators-4r9mm\" (UID: \"267737e4-3ae9-430f-8ba6-cee92ddb2e57\") " pod="service-telemetry/infrawatch-operators-4r9mm"
Dec 12 16:28:48 crc kubenswrapper[5116]: I1212 16:28:48.224402 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rzk\" (UniqueName: \"kubernetes.io/projected/267737e4-3ae9-430f-8ba6-cee92ddb2e57-kube-api-access-x9rzk\") pod \"infrawatch-operators-4r9mm\" (UID: \"267737e4-3ae9-430f-8ba6-cee92ddb2e57\") " pod="service-telemetry/infrawatch-operators-4r9mm"
Dec 12 16:28:48 crc kubenswrapper[5116]: I1212 16:28:48.277686 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-4r9mm"
Dec 12 16:28:49 crc kubenswrapper[5116]: I1212 16:28:49.415771 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:28:49 crc kubenswrapper[5116]: I1212 16:28:49.415870 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:28:50 crc kubenswrapper[5116]: I1212 16:28:50.841663 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4"]
Dec 12 16:28:50 crc kubenswrapper[5116]: I1212 16:28:50.848758 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"]
Dec 12 16:28:50 crc kubenswrapper[5116]: I1212 16:28:50.917126 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-c4wrf"]
Dec 12 16:28:50 crc kubenswrapper[5116]: I1212 16:28:50.921567 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4r9mm"]
Dec 12 16:28:52 crc kubenswrapper[5116]: I1212 16:28:52.889750 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4" event={"ID":"84696c12-24e8-4763-86da-29848fbc94e7","Type":"ContainerStarted","Data":"5f4afb87ec379d24b744ef9e13a8a8a5e1732aba2e6b87db9b400c78bbe67c41"}
Dec 12 16:28:52 crc kubenswrapper[5116]: I1212 16:28:52.892891 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-c4wrf" event={"ID":"4cc493a8-d453-41d1-966e-71d2dc866012","Type":"ContainerStarted","Data":"d73fdcfc68510352d90da9e640b9d278ae913e54dcec95e62099b394d3de93c4"}
Dec 12 16:28:52 crc kubenswrapper[5116]: I1212 16:28:52.894985 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"05664d1f-9395-41af-a2d6-8669944a9ad6","Type":"ContainerStarted","Data":"6db896924d6994864b9f50a30543b9e619f3c151f1b61115109fce303d9e7f41"}
Dec 12 16:28:52 crc kubenswrapper[5116]: I1212 16:28:52.897838 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4r9mm" event={"ID":"267737e4-3ae9-430f-8ba6-cee92ddb2e57","Type":"ContainerStarted","Data":"4e1baa5b13b37f69201ce292fff216f08702a3dd4ca16cd1c9d91e57c05542ce"}
Dec 12 16:28:52 crc kubenswrapper[5116]: I1212 16:28:52.898976 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2" event={"ID":"c4a090f1-f562-4cca-b5a0-58b80b137fdd","Type":"ContainerStarted","Data":"2e701edd9c2af2c76b19fa7e784cd0bceccadc46bb379b54342aa28e1ccecdd5"}
Dec 12 16:28:53 crc kubenswrapper[5116]: I1212 16:28:53.031176 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:28:53 crc kubenswrapper[5116]: I1212 16:28:53.058598 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:28:54 crc kubenswrapper[5116]: I1212 16:28:54.916277 5116 generic.go:358] "Generic (PLEG): container finished" podID="05664d1f-9395-41af-a2d6-8669944a9ad6" containerID="6db896924d6994864b9f50a30543b9e619f3c151f1b61115109fce303d9e7f41" exitCode=0
Dec 12 16:28:54 crc kubenswrapper[5116]: I1212 16:28:54.916470 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"05664d1f-9395-41af-a2d6-8669944a9ad6","Type":"ContainerDied","Data":"6db896924d6994864b9f50a30543b9e619f3c151f1b61115109fce303d9e7f41"}
Dec 12 16:28:55 crc kubenswrapper[5116]: I1212 16:28:55.935221 5116 generic.go:358] "Generic (PLEG): container finished" podID="05664d1f-9395-41af-a2d6-8669944a9ad6" containerID="b7e68cf6ea65a03d407ea5e36efacd3b91987aa2fbdd45b5a8432859e7bf451d" exitCode=0
Dec 12 16:28:55 crc kubenswrapper[5116]: I1212 16:28:55.935365 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"05664d1f-9395-41af-a2d6-8669944a9ad6","Type":"ContainerDied","Data":"b7e68cf6ea65a03d407ea5e36efacd3b91987aa2fbdd45b5a8432859e7bf451d"}
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.063891 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-c4wrf" event={"ID":"4cc493a8-d453-41d1-966e-71d2dc866012","Type":"ContainerStarted","Data":"b25f5ae828e2120fb9f274ebb4870baa5629b131162e5f15a41f7d84aa20fecd"}
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.067731 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"05664d1f-9395-41af-a2d6-8669944a9ad6","Type":"ContainerStarted","Data":"846deeff8245d27726948e76e6fe82d0df7c49a76a6a0ac15255d9a370ee7bac"}
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.068457 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.070099 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4r9mm" event={"ID":"267737e4-3ae9-430f-8ba6-cee92ddb2e57","Type":"ContainerStarted","Data":"a41733565728635f31a197b1858933ed99806d54c197f800e8678985a96e9681"}
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.072386 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2" event={"ID":"c4a090f1-f562-4cca-b5a0-58b80b137fdd","Type":"ContainerStarted","Data":"e7ae52b91bb8c08e7ca1a6dddfad1b2fbe1bc3886e7bc11c8afe14117e1e78c7"}
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.072425 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2"
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.074507 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4" event={"ID":"84696c12-24e8-4763-86da-29848fbc94e7","Type":"ContainerStarted","Data":"d1cb9e52834163d0d1da679e6c0ea359128c66ec5c6af9151eb4fb9a301fccc4"}
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.094738 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-c4wrf" podStartSLOduration=21.694176033 podStartE2EDuration="42.094710124s" podCreationTimestamp="2025-12-12 16:28:31 +0000 UTC" firstStartedPulling="2025-12-12 16:28:51.942055127 +0000 UTC m=+826.406266893" lastFinishedPulling="2025-12-12 16:29:12.342589228 +0000 UTC m=+846.806800984" observedRunningTime="2025-12-12 16:29:13.081567677 +0000 UTC m=+847.545779453" watchObservedRunningTime="2025-12-12 16:29:13.094710124 +0000 UTC m=+847.558921890"
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.126205 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2" podStartSLOduration=28.712916722 podStartE2EDuration="49.126182888s" podCreationTimestamp="2025-12-12 16:28:24 +0000 UTC" firstStartedPulling="2025-12-12 16:28:51.941892383 +0000 UTC m=+826.406104159" lastFinishedPulling="2025-12-12 16:29:12.355158559 +0000 UTC m=+846.819370325" observedRunningTime="2025-12-12 16:29:13.117644746 +0000 UTC m=+847.581856512" watchObservedRunningTime="2025-12-12 16:29:13.126182888 +0000 UTC m=+847.590394654"
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.139723 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-4r9mm" podStartSLOduration=7.8713627729999995 podStartE2EDuration="28.139697475s" podCreationTimestamp="2025-12-12 16:28:45 +0000 UTC" firstStartedPulling="2025-12-12 16:28:51.940555057 +0000 UTC m=+826.404766813" lastFinishedPulling="2025-12-12 16:29:12.208889759 +0000 UTC m=+846.673101515" observedRunningTime="2025-12-12 16:29:13.139269863 +0000 UTC m=+847.603481619" watchObservedRunningTime="2025-12-12 16:29:13.139697475 +0000 UTC m=+847.603909231"
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.213148 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=22.350823812 podStartE2EDuration="1m1.213121128s" podCreationTimestamp="2025-12-12 16:28:12 +0000 UTC" firstStartedPulling="2025-12-12 16:28:13.170285739 +0000 UTC m=+787.634497495" lastFinishedPulling="2025-12-12 16:28:52.032583055 +0000 UTC m=+826.496794811" observedRunningTime="2025-12-12 16:29:13.178319333 +0000 UTC m=+847.642531109" watchObservedRunningTime="2025-12-12 16:29:13.213121128 +0000 UTC m=+847.677332884"
Dec 12 16:29:13 crc kubenswrapper[5116]: I1212 16:29:13.227401 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dr8r4" podStartSLOduration=27.860393494 podStartE2EDuration="48.227369724s" podCreationTimestamp="2025-12-12 16:28:25 +0000 UTC" firstStartedPulling="2025-12-12 16:28:51.941864092 +0000 UTC m=+826.406075848" lastFinishedPulling="2025-12-12 16:29:12.308840312 +0000 UTC m=+846.773052078" observedRunningTime="2025-12-12 16:29:13.211512754 +0000 UTC m=+847.675724510" watchObservedRunningTime="2025-12-12 16:29:13.227369724 +0000 UTC m=+847.691581480"
Dec 12 16:29:18 crc kubenswrapper[5116]: I1212 16:29:18.278227 5116
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-4r9mm" Dec 12 16:29:18 crc kubenswrapper[5116]: I1212 16:29:18.278618 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-4r9mm" Dec 12 16:29:18 crc kubenswrapper[5116]: I1212 16:29:18.315926 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-4r9mm" Dec 12 16:29:19 crc kubenswrapper[5116]: I1212 16:29:19.085041 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-lfbl2" Dec 12 16:29:19 crc kubenswrapper[5116]: I1212 16:29:19.162693 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-4r9mm" Dec 12 16:29:19 crc kubenswrapper[5116]: I1212 16:29:19.415859 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:29:19 crc kubenswrapper[5116]: I1212 16:29:19.415989 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:29:19 crc kubenswrapper[5116]: I1212 16:29:19.416188 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:29:19 crc kubenswrapper[5116]: I1212 16:29:19.417074 5116 kuberuntime_manager.go:1107] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85975e01cd9e5ce0c52a47772394bbc32f968256a73c3499bc14dec7e81dc5eb"} pod="openshift-machine-config-operator/machine-config-daemon-bb58t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:29:19 crc kubenswrapper[5116]: I1212 16:29:19.417175 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" containerID="cri-o://85975e01cd9e5ce0c52a47772394bbc32f968256a73c3499bc14dec7e81dc5eb" gracePeriod=600 Dec 12 16:29:20 crc kubenswrapper[5116]: I1212 16:29:20.144378 5116 generic.go:358] "Generic (PLEG): container finished" podID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerID="85975e01cd9e5ce0c52a47772394bbc32f968256a73c3499bc14dec7e81dc5eb" exitCode=0 Dec 12 16:29:20 crc kubenswrapper[5116]: I1212 16:29:20.144470 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerDied","Data":"85975e01cd9e5ce0c52a47772394bbc32f968256a73c3499bc14dec7e81dc5eb"} Dec 12 16:29:20 crc kubenswrapper[5116]: I1212 16:29:20.144840 5116 scope.go:117] "RemoveContainer" containerID="f5b44f9ffb3248b33fd2a8f37604c18d69f889be1bf25780c931629ed9dec483" Dec 12 16:29:21 crc kubenswrapper[5116]: I1212 16:29:21.154152 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"6984eb907933f60d328ca599e81410bd76181c70d8e7532f77a0eff2370beae5"} Dec 12 16:29:25 crc kubenswrapper[5116]: I1212 16:29:25.184473 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="05664d1f-9395-41af-a2d6-8669944a9ad6" containerName="elasticsearch" probeResult="failure" output=< Dec 12 16:29:25 crc kubenswrapper[5116]: {"timestamp": "2025-12-12T16:29:25+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 16:29:25 crc kubenswrapper[5116]: > Dec 12 16:29:30 crc kubenswrapper[5116]: I1212 16:29:30.878229 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 16:29:35 crc kubenswrapper[5116]: I1212 16:29:35.008273 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666"] Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.812213 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.817807 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.823810 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666"] Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.824246 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68"] Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.879892 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" 
Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.879980 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.880081 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r42ql\" (UniqueName: \"kubernetes.io/projected/29b4716b-2044-43b6-8ae7-3e379ae29291-kube-api-access-r42ql\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.981683 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.982195 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r42ql\" (UniqueName: \"kubernetes.io/projected/29b4716b-2044-43b6-8ae7-3e379ae29291-kube-api-access-r42ql\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.982280 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.982392 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.982749 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.990652 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68"] Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.990888 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65"] Dec 12 16:29:37 crc kubenswrapper[5116]: I1212 16:29:37.990920 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.012702 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r42ql\" (UniqueName: \"kubernetes.io/projected/29b4716b-2044-43b6-8ae7-3e379ae29291-kube-api-access-r42ql\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.084099 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.084312 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.084650 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq89v\" (UniqueName: \"kubernetes.io/projected/8e466b07-58a6-4cf5-927c-7b65ccca28ee-kube-api-access-wq89v\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " 
pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.085405 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.096558 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65"] Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.137272 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.185879 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.185955 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.185993 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: 
\"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.186084 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv6gc\" (UniqueName: \"kubernetes.io/projected/c1e00f74-270a-4c41-a1bc-9abf223dda2c-kube-api-access-wv6gc\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.186150 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wq89v\" (UniqueName: \"kubernetes.io/projected/8e466b07-58a6-4cf5-927c-7b65ccca28ee-kube-api-access-wq89v\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.186198 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.186572 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " 
pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.186685 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.211877 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq89v\" (UniqueName: \"kubernetes.io/projected/8e466b07-58a6-4cf5-927c-7b65ccca28ee-kube-api-access-wq89v\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.288761 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.289316 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.289368 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wv6gc\" (UniqueName: \"kubernetes.io/projected/c1e00f74-270a-4c41-a1bc-9abf223dda2c-kube-api-access-wv6gc\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.289690 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.289746 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.325558 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv6gc\" (UniqueName: \"kubernetes.io/projected/c1e00f74-270a-4c41-a1bc-9abf223dda2c-kube-api-access-wv6gc\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.335450 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.410680 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.598164 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68"] Dec 12 16:29:38 crc kubenswrapper[5116]: W1212 16:29:38.613267 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e466b07_58a6_4cf5_927c_7b65ccca28ee.slice/crio-12b40464976cbe0f74a4ea01765725568cecaf3a4b6b9783f22c2f30174e0dc3 WatchSource:0}: Error finding container 12b40464976cbe0f74a4ea01765725568cecaf3a4b6b9783f22c2f30174e0dc3: Status 404 returned error can't find the container with id 12b40464976cbe0f74a4ea01765725568cecaf3a4b6b9783f22c2f30174e0dc3 Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.681003 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666"] Dec 12 16:29:38 crc kubenswrapper[5116]: I1212 16:29:38.900164 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65"] Dec 12 16:29:38 crc kubenswrapper[5116]: W1212 16:29:38.921720 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e00f74_270a_4c41_a1bc_9abf223dda2c.slice/crio-edb579869bf448b78849fd4e2ae2c5b45be5b1aabe052734ee9fa72839e8306e WatchSource:0}: Error finding container edb579869bf448b78849fd4e2ae2c5b45be5b1aabe052734ee9fa72839e8306e: Status 404 returned error can't find the container with id 
edb579869bf448b78849fd4e2ae2c5b45be5b1aabe052734ee9fa72839e8306e Dec 12 16:29:39 crc kubenswrapper[5116]: I1212 16:29:39.289301 5116 generic.go:358] "Generic (PLEG): container finished" podID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerID="5034768e73b94d6bcb6d6662848e5010b53a4ac6f8a4eb2011a6241fe9945b50" exitCode=0 Dec 12 16:29:39 crc kubenswrapper[5116]: I1212 16:29:39.289392 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" event={"ID":"29b4716b-2044-43b6-8ae7-3e379ae29291","Type":"ContainerDied","Data":"5034768e73b94d6bcb6d6662848e5010b53a4ac6f8a4eb2011a6241fe9945b50"} Dec 12 16:29:39 crc kubenswrapper[5116]: I1212 16:29:39.289443 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" event={"ID":"29b4716b-2044-43b6-8ae7-3e379ae29291","Type":"ContainerStarted","Data":"fd9724dc4ddd9dd30ab986e6814bd2b6ead5aa5cd78ac358856124a9ee49e353"} Dec 12 16:29:39 crc kubenswrapper[5116]: I1212 16:29:39.292259 5116 generic.go:358] "Generic (PLEG): container finished" podID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerID="86b08e3ac19457a018328d61d41fdf2ba5511451e4173dad357b96149b4ed9ed" exitCode=0 Dec 12 16:29:39 crc kubenswrapper[5116]: I1212 16:29:39.292357 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" event={"ID":"8e466b07-58a6-4cf5-927c-7b65ccca28ee","Type":"ContainerDied","Data":"86b08e3ac19457a018328d61d41fdf2ba5511451e4173dad357b96149b4ed9ed"} Dec 12 16:29:39 crc kubenswrapper[5116]: I1212 16:29:39.292842 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" 
event={"ID":"8e466b07-58a6-4cf5-927c-7b65ccca28ee","Type":"ContainerStarted","Data":"12b40464976cbe0f74a4ea01765725568cecaf3a4b6b9783f22c2f30174e0dc3"} Dec 12 16:29:39 crc kubenswrapper[5116]: I1212 16:29:39.295749 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" event={"ID":"c1e00f74-270a-4c41-a1bc-9abf223dda2c","Type":"ContainerStarted","Data":"edb579869bf448b78849fd4e2ae2c5b45be5b1aabe052734ee9fa72839e8306e"} Dec 12 16:29:41 crc kubenswrapper[5116]: I1212 16:29:41.311622 5116 generic.go:358] "Generic (PLEG): container finished" podID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerID="f78bd5d5d32d7981e01d51671f3c7c40c9766e8a4d038878fba2f83dd166eb58" exitCode=0 Dec 12 16:29:41 crc kubenswrapper[5116]: I1212 16:29:41.311730 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" event={"ID":"c1e00f74-270a-4c41-a1bc-9abf223dda2c","Type":"ContainerDied","Data":"f78bd5d5d32d7981e01d51671f3c7c40c9766e8a4d038878fba2f83dd166eb58"} Dec 12 16:29:42 crc kubenswrapper[5116]: I1212 16:29:42.321344 5116 generic.go:358] "Generic (PLEG): container finished" podID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerID="4c57225f787f0b887767beff6cca38ede5475511c6b004b62733ce74610ed7f3" exitCode=0 Dec 12 16:29:42 crc kubenswrapper[5116]: I1212 16:29:42.321454 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" event={"ID":"29b4716b-2044-43b6-8ae7-3e379ae29291","Type":"ContainerDied","Data":"4c57225f787f0b887767beff6cca38ede5475511c6b004b62733ce74610ed7f3"} Dec 12 16:29:42 crc kubenswrapper[5116]: I1212 16:29:42.330332 5116 generic.go:358] "Generic (PLEG): container finished" podID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerID="b53a10187233795abab63ee7518b43111c2ad89bdee660dccefd8e020b6b5f6d" exitCode=0 Dec 12 
16:29:42 crc kubenswrapper[5116]: I1212 16:29:42.331236 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" event={"ID":"8e466b07-58a6-4cf5-927c-7b65ccca28ee","Type":"ContainerDied","Data":"b53a10187233795abab63ee7518b43111c2ad89bdee660dccefd8e020b6b5f6d"}
Dec 12 16:29:43 crc kubenswrapper[5116]: I1212 16:29:43.339805 5116 generic.go:358] "Generic (PLEG): container finished" podID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerID="1508b6d44c04c5c3fef38055d6e9a686b421b1f0009cd5687d5946332b36176e" exitCode=0
Dec 12 16:29:43 crc kubenswrapper[5116]: I1212 16:29:43.339880 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" event={"ID":"8e466b07-58a6-4cf5-927c-7b65ccca28ee","Type":"ContainerDied","Data":"1508b6d44c04c5c3fef38055d6e9a686b421b1f0009cd5687d5946332b36176e"}
Dec 12 16:29:43 crc kubenswrapper[5116]: I1212 16:29:43.343483 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" event={"ID":"c1e00f74-270a-4c41-a1bc-9abf223dda2c","Type":"ContainerStarted","Data":"fd210051408d7edb1d0b9113ef470645113ad095d645e860f7d206846b04f745"}
Dec 12 16:29:43 crc kubenswrapper[5116]: I1212 16:29:43.346279 5116 generic.go:358] "Generic (PLEG): container finished" podID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerID="cf0ee74ebcf5d08c69bbe0a43d6a9a6bee1c08b14ae37d7440680dc22c018389" exitCode=0
Dec 12 16:29:43 crc kubenswrapper[5116]: I1212 16:29:43.346362 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" event={"ID":"29b4716b-2044-43b6-8ae7-3e379ae29291","Type":"ContainerDied","Data":"cf0ee74ebcf5d08c69bbe0a43d6a9a6bee1c08b14ae37d7440680dc22c018389"}
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.356944 5116 generic.go:358] "Generic (PLEG): container finished" podID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerID="fd210051408d7edb1d0b9113ef470645113ad095d645e860f7d206846b04f745" exitCode=0
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.357026 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" event={"ID":"c1e00f74-270a-4c41-a1bc-9abf223dda2c","Type":"ContainerDied","Data":"fd210051408d7edb1d0b9113ef470645113ad095d645e860f7d206846b04f745"}
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.665758 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68"
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.724636 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666"
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.799512 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq89v\" (UniqueName: \"kubernetes.io/projected/8e466b07-58a6-4cf5-927c-7b65ccca28ee-kube-api-access-wq89v\") pod \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") "
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.799679 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-bundle\") pod \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") "
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.799789 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r42ql\" (UniqueName: \"kubernetes.io/projected/29b4716b-2044-43b6-8ae7-3e379ae29291-kube-api-access-r42ql\") pod \"29b4716b-2044-43b6-8ae7-3e379ae29291\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") "
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.799837 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-util\") pod \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\" (UID: \"8e466b07-58a6-4cf5-927c-7b65ccca28ee\") "
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.799908 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-util\") pod \"29b4716b-2044-43b6-8ae7-3e379ae29291\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") "
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.799964 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-bundle\") pod \"29b4716b-2044-43b6-8ae7-3e379ae29291\" (UID: \"29b4716b-2044-43b6-8ae7-3e379ae29291\") "
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.800598 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-bundle" (OuterVolumeSpecName: "bundle") pod "29b4716b-2044-43b6-8ae7-3e379ae29291" (UID: "29b4716b-2044-43b6-8ae7-3e379ae29291"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.801085 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-bundle" (OuterVolumeSpecName: "bundle") pod "8e466b07-58a6-4cf5-927c-7b65ccca28ee" (UID: "8e466b07-58a6-4cf5-927c-7b65ccca28ee"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.805392 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b4716b-2044-43b6-8ae7-3e379ae29291-kube-api-access-r42ql" (OuterVolumeSpecName: "kube-api-access-r42ql") pod "29b4716b-2044-43b6-8ae7-3e379ae29291" (UID: "29b4716b-2044-43b6-8ae7-3e379ae29291"). InnerVolumeSpecName "kube-api-access-r42ql". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.805472 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e466b07-58a6-4cf5-927c-7b65ccca28ee-kube-api-access-wq89v" (OuterVolumeSpecName: "kube-api-access-wq89v") pod "8e466b07-58a6-4cf5-927c-7b65ccca28ee" (UID: "8e466b07-58a6-4cf5-927c-7b65ccca28ee"). InnerVolumeSpecName "kube-api-access-wq89v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.879062 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-util" (OuterVolumeSpecName: "util") pod "29b4716b-2044-43b6-8ae7-3e379ae29291" (UID: "29b4716b-2044-43b6-8ae7-3e379ae29291"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.901135 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-util\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.901178 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29b4716b-2044-43b6-8ae7-3e379ae29291-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.901191 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wq89v\" (UniqueName: \"kubernetes.io/projected/8e466b07-58a6-4cf5-927c-7b65ccca28ee-kube-api-access-wq89v\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.901205 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:44 crc kubenswrapper[5116]: I1212 16:29:44.901217 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r42ql\" (UniqueName: \"kubernetes.io/projected/29b4716b-2044-43b6-8ae7-3e379ae29291-kube-api-access-r42ql\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.071635 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-util" (OuterVolumeSpecName: "util") pod "8e466b07-58a6-4cf5-927c-7b65ccca28ee" (UID: "8e466b07-58a6-4cf5-927c-7b65ccca28ee"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.104491 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e466b07-58a6-4cf5-927c-7b65ccca28ee-util\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.367673 5116 generic.go:358] "Generic (PLEG): container finished" podID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerID="770f5d6e255b148fdeb1e9af122b28567558fba8ed9b89d686e3a24b9e14df19" exitCode=0
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.367770 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" event={"ID":"c1e00f74-270a-4c41-a1bc-9abf223dda2c","Type":"ContainerDied","Data":"770f5d6e255b148fdeb1e9af122b28567558fba8ed9b89d686e3a24b9e14df19"}
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.371721 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666"
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.371720 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fw8666" event={"ID":"29b4716b-2044-43b6-8ae7-3e379ae29291","Type":"ContainerDied","Data":"fd9724dc4ddd9dd30ab986e6814bd2b6ead5aa5cd78ac358856124a9ee49e353"}
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.371781 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9724dc4ddd9dd30ab986e6814bd2b6ead5aa5cd78ac358856124a9ee49e353"
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.374183 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68" event={"ID":"8e466b07-58a6-4cf5-927c-7b65ccca28ee","Type":"ContainerDied","Data":"12b40464976cbe0f74a4ea01765725568cecaf3a4b6b9783f22c2f30174e0dc3"}
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.374241 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b40464976cbe0f74a4ea01765725568cecaf3a4b6b9783f22c2f30174e0dc3"
Dec 12 16:29:45 crc kubenswrapper[5116]: I1212 16:29:45.374392 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788sfp68"
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.631276 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65"
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.730382 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv6gc\" (UniqueName: \"kubernetes.io/projected/c1e00f74-270a-4c41-a1bc-9abf223dda2c-kube-api-access-wv6gc\") pod \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") "
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.730492 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-util\") pod \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") "
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.730647 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-bundle\") pod \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\" (UID: \"c1e00f74-270a-4c41-a1bc-9abf223dda2c\") "
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.734251 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-bundle" (OuterVolumeSpecName: "bundle") pod "c1e00f74-270a-4c41-a1bc-9abf223dda2c" (UID: "c1e00f74-270a-4c41-a1bc-9abf223dda2c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.738261 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e00f74-270a-4c41-a1bc-9abf223dda2c-kube-api-access-wv6gc" (OuterVolumeSpecName: "kube-api-access-wv6gc") pod "c1e00f74-270a-4c41-a1bc-9abf223dda2c" (UID: "c1e00f74-270a-4c41-a1bc-9abf223dda2c"). InnerVolumeSpecName "kube-api-access-wv6gc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.742452 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-util" (OuterVolumeSpecName: "util") pod "c1e00f74-270a-4c41-a1bc-9abf223dda2c" (UID: "c1e00f74-270a-4c41-a1bc-9abf223dda2c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.832536 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.832905 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wv6gc\" (UniqueName: \"kubernetes.io/projected/c1e00f74-270a-4c41-a1bc-9abf223dda2c-kube-api-access-wv6gc\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:46 crc kubenswrapper[5116]: I1212 16:29:46.832997 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1e00f74-270a-4c41-a1bc-9abf223dda2c-util\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:47 crc kubenswrapper[5116]: I1212 16:29:47.391818 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65" event={"ID":"c1e00f74-270a-4c41-a1bc-9abf223dda2c","Type":"ContainerDied","Data":"edb579869bf448b78849fd4e2ae2c5b45be5b1aabe052734ee9fa72839e8306e"}
Dec 12 16:29:47 crc kubenswrapper[5116]: I1212 16:29:47.392188 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edb579869bf448b78849fd4e2ae2c5b45be5b1aabe052734ee9fa72839e8306e"
Dec 12 16:29:47 crc kubenswrapper[5116]: I1212 16:29:47.392209 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e747dwk65"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.053263 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-gck5p"]
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054153 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054167 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054176 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerName="util"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054181 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerName="util"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054194 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerName="util"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054199 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerName="util"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054210 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerName="util"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054215 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerName="util"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054226 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerName="pull"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054232 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerName="pull"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054247 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054254 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054266 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054273 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054281 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerName="pull"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054288 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerName="pull"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054300 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerName="pull"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054307 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerName="pull"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054445 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1e00f74-270a-4c41-a1bc-9abf223dda2c" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054455 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="29b4716b-2044-43b6-8ae7-3e379ae29291" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.054464 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="8e466b07-58a6-4cf5-927c-7b65ccca28ee" containerName="extract"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.058431 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.060474 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-fbvq7\""
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.069259 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-gck5p"]
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.140216 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng7rr\" (UniqueName: \"kubernetes.io/projected/e3252827-8e47-4a19-a79f-2afa3571f07d-kube-api-access-ng7rr\") pod \"smart-gateway-operator-5766884c8f-gck5p\" (UID: \"e3252827-8e47-4a19-a79f-2afa3571f07d\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.140280 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e3252827-8e47-4a19-a79f-2afa3571f07d-runner\") pod \"smart-gateway-operator-5766884c8f-gck5p\" (UID: \"e3252827-8e47-4a19-a79f-2afa3571f07d\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.241790 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ng7rr\" (UniqueName: \"kubernetes.io/projected/e3252827-8e47-4a19-a79f-2afa3571f07d-kube-api-access-ng7rr\") pod \"smart-gateway-operator-5766884c8f-gck5p\" (UID: \"e3252827-8e47-4a19-a79f-2afa3571f07d\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.242382 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e3252827-8e47-4a19-a79f-2afa3571f07d-runner\") pod \"smart-gateway-operator-5766884c8f-gck5p\" (UID: \"e3252827-8e47-4a19-a79f-2afa3571f07d\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.242876 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e3252827-8e47-4a19-a79f-2afa3571f07d-runner\") pod \"smart-gateway-operator-5766884c8f-gck5p\" (UID: \"e3252827-8e47-4a19-a79f-2afa3571f07d\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.263726 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng7rr\" (UniqueName: \"kubernetes.io/projected/e3252827-8e47-4a19-a79f-2afa3571f07d-kube-api-access-ng7rr\") pod \"smart-gateway-operator-5766884c8f-gck5p\" (UID: \"e3252827-8e47-4a19-a79f-2afa3571f07d\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.379742 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p"
Dec 12 16:29:54 crc kubenswrapper[5116]: I1212 16:29:54.813256 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-gck5p"]
Dec 12 16:29:55 crc kubenswrapper[5116]: I1212 16:29:55.453154 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p" event={"ID":"e3252827-8e47-4a19-a79f-2afa3571f07d","Type":"ContainerStarted","Data":"5b8625a948bf4161d6ef777f4c5e7468c76133472045897736b438acac900981"}
Dec 12 16:29:55 crc kubenswrapper[5116]: I1212 16:29:55.459911 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"]
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.074921 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.078970 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-jd69n\""
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.093346 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"]
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.174530 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9-runner\") pod \"service-telemetry-operator-ccf9cd448-swxvt\" (UID: \"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.175260 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8llj\" (UniqueName: \"kubernetes.io/projected/07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9-kube-api-access-w8llj\") pod \"service-telemetry-operator-ccf9cd448-swxvt\" (UID: \"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.276996 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9-runner\") pod \"service-telemetry-operator-ccf9cd448-swxvt\" (UID: \"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.277162 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8llj\" (UniqueName: \"kubernetes.io/projected/07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9-kube-api-access-w8llj\") pod \"service-telemetry-operator-ccf9cd448-swxvt\" (UID: \"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.277652 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9-runner\") pod \"service-telemetry-operator-ccf9cd448-swxvt\" (UID: \"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.297628 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8llj\" (UniqueName: \"kubernetes.io/projected/07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9-kube-api-access-w8llj\") pod \"service-telemetry-operator-ccf9cd448-swxvt\" (UID: \"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.399429 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.681142 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-swxvt"]
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.810526 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-44mb8"]
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.824025 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.828250 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-h5b2d\""
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.835715 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-44mb8"]
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.894356 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nj2g\" (UniqueName: \"kubernetes.io/projected/47b23bb9-f19b-4efd-af91-774f2c5200be-kube-api-access-9nj2g\") pod \"interconnect-operator-78b9bd8798-44mb8\" (UID: \"47b23bb9-f19b-4efd-af91-774f2c5200be\") " pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8"
Dec 12 16:29:56 crc kubenswrapper[5116]: I1212 16:29:56.996640 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nj2g\" (UniqueName: \"kubernetes.io/projected/47b23bb9-f19b-4efd-af91-774f2c5200be-kube-api-access-9nj2g\") pod \"interconnect-operator-78b9bd8798-44mb8\" (UID: \"47b23bb9-f19b-4efd-af91-774f2c5200be\") " pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8"
Dec 12 16:29:57 crc kubenswrapper[5116]: I1212 16:29:57.019258 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nj2g\" (UniqueName: \"kubernetes.io/projected/47b23bb9-f19b-4efd-af91-774f2c5200be-kube-api-access-9nj2g\") pod \"interconnect-operator-78b9bd8798-44mb8\" (UID: \"47b23bb9-f19b-4efd-af91-774f2c5200be\") " pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8"
Dec 12 16:29:57 crc kubenswrapper[5116]: I1212 16:29:57.147272 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8"
Dec 12 16:29:57 crc kubenswrapper[5116]: I1212 16:29:57.474176 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt" event={"ID":"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9","Type":"ContainerStarted","Data":"5810603fdc4f9ef3ba3e7c8c3bbcce9b4e44df5aaaa2cd3f989f9448c8772d34"}
Dec 12 16:29:57 crc kubenswrapper[5116]: I1212 16:29:57.545886 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-44mb8"]
Dec 12 16:29:57 crc kubenswrapper[5116]: W1212 16:29:57.552337 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47b23bb9_f19b_4efd_af91_774f2c5200be.slice/crio-628ad4932e1bdc650008b38c3792196fecec1fdc6b0da9e0a9ef1e68f6931843 WatchSource:0}: Error finding container 628ad4932e1bdc650008b38c3792196fecec1fdc6b0da9e0a9ef1e68f6931843: Status 404 returned error can't find the container with id 628ad4932e1bdc650008b38c3792196fecec1fdc6b0da9e0a9ef1e68f6931843
Dec 12 16:29:58 crc kubenswrapper[5116]: I1212 16:29:58.487707 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8" event={"ID":"47b23bb9-f19b-4efd-af91-774f2c5200be","Type":"ContainerStarted","Data":"628ad4932e1bdc650008b38c3792196fecec1fdc6b0da9e0a9ef1e68f6931843"}
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.178925 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"]
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.223532 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"]
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.223799 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.233624 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.233849 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.297710 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pzsz\" (UniqueName: \"kubernetes.io/projected/1251e511-2ae5-4caf-9d80-e00363cfe0ef-kube-api-access-8pzsz\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.297829 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1251e511-2ae5-4caf-9d80-e00363cfe0ef-secret-volume\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.297889 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1251e511-2ae5-4caf-9d80-e00363cfe0ef-config-volume\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.399502 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1251e511-2ae5-4caf-9d80-e00363cfe0ef-secret-volume\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.399593 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1251e511-2ae5-4caf-9d80-e00363cfe0ef-config-volume\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.399671 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8pzsz\" (UniqueName: \"kubernetes.io/projected/1251e511-2ae5-4caf-9d80-e00363cfe0ef-kube-api-access-8pzsz\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.400862 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1251e511-2ae5-4caf-9d80-e00363cfe0ef-config-volume\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.406933 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1251e511-2ae5-4caf-9d80-e00363cfe0ef-secret-volume\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.423004 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pzsz\" (UniqueName: \"kubernetes.io/projected/1251e511-2ae5-4caf-9d80-e00363cfe0ef-kube-api-access-8pzsz\") pod \"collect-profiles-29425950-6d5hz\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:00 crc kubenswrapper[5116]: I1212 16:30:00.559179 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:01 crc kubenswrapper[5116]: I1212 16:30:01.126232 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"]
Dec 12 16:30:01 crc kubenswrapper[5116]: I1212 16:30:01.539083 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz" event={"ID":"1251e511-2ae5-4caf-9d80-e00363cfe0ef","Type":"ContainerStarted","Data":"2fb35d9e2a6644f56f16345e0507ab417161ded3ca2c4d914b084cf8fef66712"}
Dec 12 16:30:08 crc kubenswrapper[5116]: I1212 16:30:08.641661 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz" event={"ID":"1251e511-2ae5-4caf-9d80-e00363cfe0ef","Type":"ContainerStarted","Data":"a5366d12063e69e4b50d4e61143ff68eca56b3626ab52ac10d84695683c34fb5"}
Dec 12 16:30:08 crc kubenswrapper[5116]: I1212 16:30:08.663949 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz" podStartSLOduration=8.663925365 podStartE2EDuration="8.663925365s" podCreationTimestamp="2025-12-12 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:30:08.658389106 +0000 UTC m=+903.122600872" watchObservedRunningTime="2025-12-12 16:30:08.663925365 +0000 UTC m=+903.128137121"
Dec 12 16:30:09 crc kubenswrapper[5116]: I1212 16:30:09.651880 5116 generic.go:358] "Generic (PLEG): container finished" podID="1251e511-2ae5-4caf-9d80-e00363cfe0ef" containerID="a5366d12063e69e4b50d4e61143ff68eca56b3626ab52ac10d84695683c34fb5" exitCode=0
Dec 12 16:30:09 crc kubenswrapper[5116]: I1212 16:30:09.652135 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz" event={"ID":"1251e511-2ae5-4caf-9d80-e00363cfe0ef","Type":"ContainerDied","Data":"a5366d12063e69e4b50d4e61143ff68eca56b3626ab52ac10d84695683c34fb5"}
Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.335751 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log"
Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.342214 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz"
Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.431805 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bphkq_0e71d710-0829-4655-b88f-9318b9776228/kube-multus/0.log"
Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.461660 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.491036 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1251e511-2ae5-4caf-9d80-e00363cfe0ef-secret-volume\") pod \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.491272 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pzsz\" (UniqueName: \"kubernetes.io/projected/1251e511-2ae5-4caf-9d80-e00363cfe0ef-kube-api-access-8pzsz\") pod \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.491312 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1251e511-2ae5-4caf-9d80-e00363cfe0ef-config-volume\") pod \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\" (UID: \"1251e511-2ae5-4caf-9d80-e00363cfe0ef\") " Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.492079 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1251e511-2ae5-4caf-9d80-e00363cfe0ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "1251e511-2ae5-4caf-9d80-e00363cfe0ef" (UID: "1251e511-2ae5-4caf-9d80-e00363cfe0ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.502178 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1251e511-2ae5-4caf-9d80-e00363cfe0ef-kube-api-access-8pzsz" (OuterVolumeSpecName: "kube-api-access-8pzsz") pod "1251e511-2ae5-4caf-9d80-e00363cfe0ef" (UID: "1251e511-2ae5-4caf-9d80-e00363cfe0ef"). InnerVolumeSpecName "kube-api-access-8pzsz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.520640 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1251e511-2ae5-4caf-9d80-e00363cfe0ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1251e511-2ae5-4caf-9d80-e00363cfe0ef" (UID: "1251e511-2ae5-4caf-9d80-e00363cfe0ef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.592492 5116 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1251e511-2ae5-4caf-9d80-e00363cfe0ef-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.592547 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pzsz\" (UniqueName: \"kubernetes.io/projected/1251e511-2ae5-4caf-9d80-e00363cfe0ef-kube-api-access-8pzsz\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.592563 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1251e511-2ae5-4caf-9d80-e00363cfe0ef-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.719374 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz" Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.719403 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-6d5hz" event={"ID":"1251e511-2ae5-4caf-9d80-e00363cfe0ef","Type":"ContainerDied","Data":"2fb35d9e2a6644f56f16345e0507ab417161ded3ca2c4d914b084cf8fef66712"} Dec 12 16:30:18 crc kubenswrapper[5116]: I1212 16:30:18.719496 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb35d9e2a6644f56f16345e0507ab417161ded3ca2c4d914b084cf8fef66712" Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.354972 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log" Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.384909 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-bphkq_0e71d710-0829-4655-b88f-9318b9776228/kube-multus/0.log" Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.402998 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.904164 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p" event={"ID":"e3252827-8e47-4a19-a79f-2afa3571f07d","Type":"ContainerStarted","Data":"75b41379fc9230368099a6c8e282970bd697ded35a75c8c6dcfa2a3607259ffa"} Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.906653 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt" event={"ID":"07b0c3ba-9cbc-47ec-ae5c-6bffed7a98e9","Type":"ContainerStarted","Data":"da902aa4f0d7da786c144a93f02a4b54eeb306bd6b8a707184b5a332cf4d2a48"} Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.909295 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8" event={"ID":"47b23bb9-f19b-4efd-af91-774f2c5200be","Type":"ContainerStarted","Data":"27e4982301d8575774e66492369d34ca4c1acb0e61bc3ab21845e4f5ef9d11f7"} Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.930552 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5766884c8f-gck5p" podStartSLOduration=1.332948107 podStartE2EDuration="43.929198317s" podCreationTimestamp="2025-12-12 16:29:54 +0000 UTC" firstStartedPulling="2025-12-12 16:29:54.824516717 +0000 UTC m=+889.288728473" lastFinishedPulling="2025-12-12 16:30:37.420766927 +0000 UTC m=+931.884978683" observedRunningTime="2025-12-12 16:30:37.922957899 +0000 UTC m=+932.387169665" watchObservedRunningTime="2025-12-12 16:30:37.929198317 +0000 UTC 
m=+932.393410093" Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.949339 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-ccf9cd448-swxvt" podStartSLOduration=2.093476076 podStartE2EDuration="42.949305619s" podCreationTimestamp="2025-12-12 16:29:55 +0000 UTC" firstStartedPulling="2025-12-12 16:29:56.693391625 +0000 UTC m=+891.157603391" lastFinishedPulling="2025-12-12 16:30:37.549221178 +0000 UTC m=+932.013432934" observedRunningTime="2025-12-12 16:30:37.94638174 +0000 UTC m=+932.410593526" watchObservedRunningTime="2025-12-12 16:30:37.949305619 +0000 UTC m=+932.413517385" Dec 12 16:30:37 crc kubenswrapper[5116]: I1212 16:30:37.983975 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-44mb8" podStartSLOduration=13.685039393 podStartE2EDuration="41.983941833s" podCreationTimestamp="2025-12-12 16:29:56 +0000 UTC" firstStartedPulling="2025-12-12 16:29:57.563861892 +0000 UTC m=+892.028073648" lastFinishedPulling="2025-12-12 16:30:25.862764332 +0000 UTC m=+920.326976088" observedRunningTime="2025-12-12 16:30:37.977958931 +0000 UTC m=+932.442170687" watchObservedRunningTime="2025-12-12 16:30:37.983941833 +0000 UTC m=+932.448153589" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.978800 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-njxnn"] Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.980352 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1251e511-2ae5-4caf-9d80-e00363cfe0ef" containerName="collect-profiles" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.980371 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="1251e511-2ae5-4caf-9d80-e00363cfe0ef" containerName="collect-profiles" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.980521 5116 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="1251e511-2ae5-4caf-9d80-e00363cfe0ef" containerName="collect-profiles" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.994275 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-njxnn"] Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.994486 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.998008 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.998073 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.998481 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.999776 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Dec 12 16:31:05 crc kubenswrapper[5116]: I1212 16:31:05.999938 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:05.999954 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-k5mk5\"" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.000266 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.063318 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-users\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.063905 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.063986 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvp2\" (UniqueName: \"kubernetes.io/projected/f185ad12-1126-40aa-929f-632af4f9cfe4-kube-api-access-xxvp2\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.064038 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.064085 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.064153 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-config\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.064244 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.165882 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-config\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.166015 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.166096 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-users\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.166346 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.166390 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxvp2\" (UniqueName: \"kubernetes.io/projected/f185ad12-1126-40aa-929f-632af4f9cfe4-kube-api-access-xxvp2\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.166459 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.166502 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.166906 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-config\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.173797 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-users\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.174822 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.175474 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.175672 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.175828 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.190539 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxvp2\" (UniqueName: \"kubernetes.io/projected/f185ad12-1126-40aa-929f-632af4f9cfe4-kube-api-access-xxvp2\") pod \"default-interconnect-55bf8d5cb-njxnn\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.352396 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.782506 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-njxnn"] Dec 12 16:31:06 crc kubenswrapper[5116]: I1212 16:31:06.794834 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:31:07 crc kubenswrapper[5116]: I1212 16:31:07.134976 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" event={"ID":"f185ad12-1126-40aa-929f-632af4f9cfe4","Type":"ContainerStarted","Data":"0ad154fdb96264fca09bd548bb9940d50a14af083e64a87144ca06427855c259"} Dec 12 16:31:14 crc kubenswrapper[5116]: I1212 16:31:14.203078 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" event={"ID":"f185ad12-1126-40aa-929f-632af4f9cfe4","Type":"ContainerStarted","Data":"baadea4e9ddb8d2bbb9234ce9c486264747ccca46a25d705491bbb9f555c6ed3"} Dec 12 16:31:14 crc kubenswrapper[5116]: I1212 16:31:14.240440 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" podStartSLOduration=2.904209221 podStartE2EDuration="9.240413007s" podCreationTimestamp="2025-12-12 16:31:05 +0000 UTC" firstStartedPulling="2025-12-12 16:31:06.795085414 +0000 UTC m=+961.259297170" lastFinishedPulling="2025-12-12 16:31:13.1312892 +0000 UTC m=+967.595500956" observedRunningTime="2025-12-12 16:31:14.222424763 +0000 UTC m=+968.686636559" watchObservedRunningTime="2025-12-12 16:31:14.240413007 +0000 UTC m=+968.704624783" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.831326 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.904947 5116 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.905223 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.908956 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.908985 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.909406 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-7lf2f\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.909489 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.910418 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.910447 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.911008 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.911720 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.976407 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977004 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977120 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-config\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977149 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977169 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc 
kubenswrapper[5116]: I1212 16:31:18.977226 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-config-out\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977314 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-tls-assets\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977360 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-web-config\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977380 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhf8g\" (UniqueName: \"kubernetes.io/projected/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-kube-api-access-nhf8g\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" Dec 12 16:31:18 crc kubenswrapper[5116]: I1212 16:31:18.977450 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0" 
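The pod_startup_latency_tracker entries earlier in this log (for example the one for smart-gateway-operator-5766884c8f-gck5p) report both a podStartE2EDuration and a smaller podStartSLOduration, together with firstStartedPulling/lastFinishedPulling timestamps carrying monotonic `m=+` offsets. A plausible reading, checked below against the logged numbers, is that the SLO duration is the end-to-end duration minus the image-pull window. This is a sketch based on that interpretation of the fields, not an official kubelet definition:

```python
# Check: podStartSLOduration ~= podStartE2EDuration - image-pull window,
# using the monotonic "m=+" offsets from the smart-gateway-operator
# startup-latency entry above. The field interpretation is an assumption.
e2e = 43.929198317                     # podStartE2EDuration, seconds (from log)
slo = 1.332948107                      # podStartSLOduration, seconds (from log)
first_started_pulling = 889.288728473  # m=+ offset of firstStartedPulling
last_finished_pulling = 931.884978683  # m=+ offset of lastFinishedPulling

pull_window = last_finished_pulling - first_started_pulling
print(abs((e2e - pull_window) - slo) < 1e-6)  # the two durations reconcile
```

The same arithmetic reconciles the other two startup-latency entries in this log (service-telemetry-operator and interconnect-operator), which supports the reading that time spent pulling images is excluded from the SLO-tracked duration.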
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079547 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-tls-assets\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079611 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-web-config\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079638 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhf8g\" (UniqueName: \"kubernetes.io/projected/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-kube-api-access-nhf8g\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079688 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079731 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079811 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079859 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-config\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079884 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079909 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.079935 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-config-out\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: E1212 16:31:19.081274 5116 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 12 16:31:19 crc kubenswrapper[5116]: E1212 16:31:19.081361 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls podName:7f2e62be-a53e-4a2f-ad15-10c4e33c351c nodeName:}" failed. No retries permitted until 2025-12-12 16:31:19.581334591 +0000 UTC m=+974.045546347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7f2e62be-a53e-4a2f-ad15-10c4e33c351c") : secret "default-prometheus-proxy-tls" not found
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.081609 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.081853 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.085076 5116 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.085129 5116 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1cf5b00c95b671b65e71ba8db0dbbc32aaf524fb445a8d79d3899063da35172/globalmount\"" pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.092839 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-config-out\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.093202 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-config\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.093254 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.093474 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-tls-assets\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.094347 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-web-config\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.101575 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhf8g\" (UniqueName: \"kubernetes.io/projected/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-kube-api-access-nhf8g\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.119978 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb04a604-91a5-41fd-b3c5-bb161799a958\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: I1212 16:31:19.588388 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:19 crc kubenswrapper[5116]: E1212 16:31:19.588648 5116 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 12 16:31:19 crc kubenswrapper[5116]: E1212 16:31:19.591683 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls podName:7f2e62be-a53e-4a2f-ad15-10c4e33c351c nodeName:}" failed. No retries permitted until 2025-12-12 16:31:20.59157949 +0000 UTC m=+975.055791286 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7f2e62be-a53e-4a2f-ad15-10c4e33c351c") : secret "default-prometheus-proxy-tls" not found
Dec 12 16:31:20 crc kubenswrapper[5116]: I1212 16:31:20.605090 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:20 crc kubenswrapper[5116]: I1212 16:31:20.614386 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f2e62be-a53e-4a2f-ad15-10c4e33c351c-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7f2e62be-a53e-4a2f-ad15-10c4e33c351c\") " pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:20 crc kubenswrapper[5116]: I1212 16:31:20.730610 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 12 16:31:21 crc kubenswrapper[5116]: I1212 16:31:21.005391 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 12 16:31:21 crc kubenswrapper[5116]: W1212 16:31:21.006178 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f2e62be_a53e_4a2f_ad15_10c4e33c351c.slice/crio-f9306490e8f2aa983a3ad733289a0a1750da9f52616681498eee933994808a48 WatchSource:0}: Error finding container f9306490e8f2aa983a3ad733289a0a1750da9f52616681498eee933994808a48: Status 404 returned error can't find the container with id f9306490e8f2aa983a3ad733289a0a1750da9f52616681498eee933994808a48
Dec 12 16:31:21 crc kubenswrapper[5116]: I1212 16:31:21.276394 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7f2e62be-a53e-4a2f-ad15-10c4e33c351c","Type":"ContainerStarted","Data":"f9306490e8f2aa983a3ad733289a0a1750da9f52616681498eee933994808a48"}
Dec 12 16:31:29 crc kubenswrapper[5116]: I1212 16:31:29.344745 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7f2e62be-a53e-4a2f-ad15-10c4e33c351c","Type":"ContainerStarted","Data":"42f46ed99f85ae4309384e0fc370894458a90f4b4d3cb0ee73b3940f91bf01fb"}
Dec 12 16:31:30 crc kubenswrapper[5116]: I1212 16:31:30.537705 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"]
Dec 12 16:31:30 crc kubenswrapper[5116]: I1212 16:31:30.567422 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"]
Dec 12 16:31:30 crc kubenswrapper[5116]: I1212 16:31:30.567581 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"
Dec 12 16:31:30 crc kubenswrapper[5116]: I1212 16:31:30.675647 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqhs\" (UniqueName: \"kubernetes.io/projected/b52692c4-f40c-4757-b2ba-c7683432ab6d-kube-api-access-7zqhs\") pod \"default-snmp-webhook-6774d8dfbc-2ft94\" (UID: \"b52692c4-f40c-4757-b2ba-c7683432ab6d\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"
Dec 12 16:31:30 crc kubenswrapper[5116]: I1212 16:31:30.777666 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zqhs\" (UniqueName: \"kubernetes.io/projected/b52692c4-f40c-4757-b2ba-c7683432ab6d-kube-api-access-7zqhs\") pod \"default-snmp-webhook-6774d8dfbc-2ft94\" (UID: \"b52692c4-f40c-4757-b2ba-c7683432ab6d\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"
Dec 12 16:31:30 crc kubenswrapper[5116]: I1212 16:31:30.800167 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqhs\" (UniqueName: \"kubernetes.io/projected/b52692c4-f40c-4757-b2ba-c7683432ab6d-kube-api-access-7zqhs\") pod \"default-snmp-webhook-6774d8dfbc-2ft94\" (UID: \"b52692c4-f40c-4757-b2ba-c7683432ab6d\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"
Dec 12 16:31:30 crc kubenswrapper[5116]: I1212 16:31:30.899957 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"
Dec 12 16:31:31 crc kubenswrapper[5116]: I1212 16:31:31.343984 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94"]
Dec 12 16:31:32 crc kubenswrapper[5116]: I1212 16:31:32.367063 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94" event={"ID":"b52692c4-f40c-4757-b2ba-c7683432ab6d","Type":"ContainerStarted","Data":"bd64118bee767ca696ea4506f2c3525a62f0a523d6f373f5eb18bb926f167bd0"}
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.846549 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.857563 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.861210 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.861466 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.861982 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.862002 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-7rhsp\""
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.862043 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.862354 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.863689 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.933217 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-tls-assets\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.933329 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-config-volume\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.933400 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fb75937-b354-4c08-a892-a1491914348e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fb75937-b354-4c08-a892-a1491914348e\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.933429 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.933568 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.934213 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-config-out\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.934316 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjlf\" (UniqueName: \"kubernetes.io/projected/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-kube-api-access-jjjlf\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.934361 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-web-config\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:33 crc kubenswrapper[5116]: I1212 16:31:33.934423 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.035975 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-config-volume\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036078 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-2fb75937-b354-4c08-a892-a1491914348e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fb75937-b354-4c08-a892-a1491914348e\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036130 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036389 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036440 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-config-out\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036472 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjlf\" (UniqueName: \"kubernetes.io/projected/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-kube-api-access-jjjlf\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036502 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-web-config\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036536 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.036583 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-tls-assets\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: E1212 16:31:34.037018 5116 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:34 crc kubenswrapper[5116]: E1212 16:31:34.037167 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls podName:9c8f3436-2ac8-48a7-a30f-fb5e454fbc23 nodeName:}" failed. No retries permitted until 2025-12-12 16:31:34.537136492 +0000 UTC m=+989.001348258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "9c8f3436-2ac8-48a7-a30f-fb5e454fbc23") : secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.044187 5116 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.044237 5116 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-2fb75937-b354-4c08-a892-a1491914348e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fb75937-b354-4c08-a892-a1491914348e\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4446bb206167b0a44d538aa0248ba6281f910d03b8febf08d7cc061a84f28f60/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.044190 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-web-config\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.044299 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-config-volume\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.044573 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.044616 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-config-out\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.044691 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.047826 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-tls-assets\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.058185 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjlf\" (UniqueName: \"kubernetes.io/projected/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-kube-api-access-jjjlf\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.071370 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-2fb75937-b354-4c08-a892-a1491914348e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fb75937-b354-4c08-a892-a1491914348e\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: I1212 16:31:34.544158 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:34 crc kubenswrapper[5116]: E1212 16:31:34.544384 5116 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:34 crc kubenswrapper[5116]: E1212 16:31:34.544672 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls podName:9c8f3436-2ac8-48a7-a30f-fb5e454fbc23 nodeName:}" failed. No retries permitted until 2025-12-12 16:31:35.544628088 +0000 UTC m=+990.008839844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "9c8f3436-2ac8-48a7-a30f-fb5e454fbc23") : secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:35 crc kubenswrapper[5116]: I1212 16:31:35.562489 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:35 crc kubenswrapper[5116]: E1212 16:31:35.562710 5116 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:35 crc kubenswrapper[5116]: E1212 16:31:35.562816 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls podName:9c8f3436-2ac8-48a7-a30f-fb5e454fbc23 nodeName:}" failed. No retries permitted until 2025-12-12 16:31:37.562793453 +0000 UTC m=+992.027005209 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "9c8f3436-2ac8-48a7-a30f-fb5e454fbc23") : secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:37 crc kubenswrapper[5116]: I1212 16:31:37.597000 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:37 crc kubenswrapper[5116]: E1212 16:31:37.597326 5116 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:37 crc kubenswrapper[5116]: E1212 16:31:37.597462 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls podName:9c8f3436-2ac8-48a7-a30f-fb5e454fbc23 nodeName:}" failed. No retries permitted until 2025-12-12 16:31:41.597418019 +0000 UTC m=+996.061629775 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "9c8f3436-2ac8-48a7-a30f-fb5e454fbc23") : secret "default-alertmanager-proxy-tls" not found
Dec 12 16:31:38 crc kubenswrapper[5116]: I1212 16:31:38.412840 5116 generic.go:358] "Generic (PLEG): container finished" podID="7f2e62be-a53e-4a2f-ad15-10c4e33c351c" containerID="42f46ed99f85ae4309384e0fc370894458a90f4b4d3cb0ee73b3940f91bf01fb" exitCode=0
Dec 12 16:31:38 crc kubenswrapper[5116]: I1212 16:31:38.413252 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7f2e62be-a53e-4a2f-ad15-10c4e33c351c","Type":"ContainerDied","Data":"42f46ed99f85ae4309384e0fc370894458a90f4b4d3cb0ee73b3940f91bf01fb"}
Dec 12 16:31:41 crc kubenswrapper[5116]: I1212 16:31:41.648148 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:41 crc kubenswrapper[5116]: I1212 16:31:41.658658 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c8f3436-2ac8-48a7-a30f-fb5e454fbc23-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:41 crc kubenswrapper[5116]: I1212 16:31:41.686290 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 12 16:31:49 crc kubenswrapper[5116]: I1212 16:31:49.305238 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 16:31:49 crc kubenswrapper[5116]: I1212 16:31:49.416360 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:31:49 crc kubenswrapper[5116]: I1212 16:31:49.416436 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:31:49 crc kubenswrapper[5116]: W1212 16:31:49.425851 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c8f3436_2ac8_48a7_a30f_fb5e454fbc23.slice/crio-9cad8f28affc49b7e9148718f84194ea3660b08862da5d4693d8275a1519c61d WatchSource:0}: Error finding container 9cad8f28affc49b7e9148718f84194ea3660b08862da5d4693d8275a1519c61d: Status 404 returned error can't find the container with id 9cad8f28affc49b7e9148718f84194ea3660b08862da5d4693d8275a1519c61d
Dec 12 16:31:49 crc kubenswrapper[5116]: I1212 16:31:49.512881 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23","Type":"ContainerStarted","Data":"9cad8f28affc49b7e9148718f84194ea3660b08862da5d4693d8275a1519c61d"}
Dec 12 16:31:49 crc kubenswrapper[5116]: I1212 16:31:49.515869 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94" event={"ID":"b52692c4-f40c-4757-b2ba-c7683432ab6d","Type":"ContainerStarted","Data":"d6c6df9c8bbc5621e07ed0abadaa7e87573cb4c75d6d9328481851568bb3d12d"}
Dec 12 16:31:49 crc kubenswrapper[5116]: I1212 16:31:49.538881 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-2ft94" podStartSLOduration=1.860888731 podStartE2EDuration="19.538859085s" podCreationTimestamp="2025-12-12 16:31:30 +0000 UTC" firstStartedPulling="2025-12-12 16:31:31.374830164 +0000 UTC m=+985.839041920" lastFinishedPulling="2025-12-12 16:31:49.052800518 +0000 UTC m=+1003.517012274" observedRunningTime="2025-12-12 16:31:49.531783385 +0000 UTC m=+1003.995995141" watchObservedRunningTime="2025-12-12 16:31:49.538859085 +0000 UTC m=+1004.003070841"
Dec 12 16:31:52 crc kubenswrapper[5116]: I1212 16:31:52.546554 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23","Type":"ContainerStarted","Data":"78dafc35ad21a860b3c0ab7d6a0b014ca70931e7eb0903ccca022c25cf4a8bc4"}
Dec 12 16:31:53 crc kubenswrapper[5116]: I1212 16:31:53.981775 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs"]
Dec 12 16:31:53 crc kubenswrapper[5116]: I1212 16:31:53.998329 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs"]
Dec 12 16:31:53 crc kubenswrapper[5116]: I1212 16:31:53.998513 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.005510 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.005523 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.005601 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-hzhrp\"" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.005622 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.128191 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.128255 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j5qr\" (UniqueName: \"kubernetes.io/projected/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-kube-api-access-2j5qr\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.128349 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.128442 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.128463 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.230339 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.230438 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.230470 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.230532 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.230559 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2j5qr\" (UniqueName: \"kubernetes.io/projected/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-kube-api-access-2j5qr\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: E1212 16:31:54.230999 5116 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 16:31:54 crc kubenswrapper[5116]: E1212 16:31:54.231091 5116 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls podName:5d5d9511-fa2b-4ece-9cb7-24c530042ec9 nodeName:}" failed. No retries permitted until 2025-12-12 16:31:54.731064042 +0000 UTC m=+1009.195275798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" (UID: "5d5d9511-fa2b-4ece-9cb7-24c530042ec9") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.231659 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.232076 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.238953 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc 
kubenswrapper[5116]: I1212 16:31:54.249447 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j5qr\" (UniqueName: \"kubernetes.io/projected/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-kube-api-access-2j5qr\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: I1212 16:31:54.736605 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:54 crc kubenswrapper[5116]: E1212 16:31:54.736840 5116 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 16:31:54 crc kubenswrapper[5116]: E1212 16:31:54.737084 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls podName:5d5d9511-fa2b-4ece-9cb7-24c530042ec9 nodeName:}" failed. No retries permitted until 2025-12-12 16:31:55.737063887 +0000 UTC m=+1010.201275643 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" (UID: "5d5d9511-fa2b-4ece-9cb7-24c530042ec9") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 16:31:55 crc kubenswrapper[5116]: I1212 16:31:55.570780 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7f2e62be-a53e-4a2f-ad15-10c4e33c351c","Type":"ContainerStarted","Data":"ab7124f5903e631e199df6cc63268a59431dc5366a788736f0d2a7993bac6338"} Dec 12 16:31:55 crc kubenswrapper[5116]: I1212 16:31:55.754329 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:55 crc kubenswrapper[5116]: I1212 16:31:55.760883 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5d9511-fa2b-4ece-9cb7-24c530042ec9-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-vz8fs\" (UID: \"5d5d9511-fa2b-4ece-9cb7-24c530042ec9\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:55 crc kubenswrapper[5116]: I1212 16:31:55.830462 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" Dec 12 16:31:56 crc kubenswrapper[5116]: I1212 16:31:56.103248 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs"] Dec 12 16:31:56 crc kubenswrapper[5116]: W1212 16:31:56.105502 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d5d9511_fa2b_4ece_9cb7_24c530042ec9.slice/crio-298b966a3d9b3f32c06572ee800305b7771731ea23903f9e7cb1254b2dbf0790 WatchSource:0}: Error finding container 298b966a3d9b3f32c06572ee800305b7771731ea23903f9e7cb1254b2dbf0790: Status 404 returned error can't find the container with id 298b966a3d9b3f32c06572ee800305b7771731ea23903f9e7cb1254b2dbf0790 Dec 12 16:31:56 crc kubenswrapper[5116]: I1212 16:31:56.580820 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" event={"ID":"5d5d9511-fa2b-4ece-9cb7-24c530042ec9","Type":"ContainerStarted","Data":"298b966a3d9b3f32c06572ee800305b7771731ea23903f9e7cb1254b2dbf0790"} Dec 12 16:31:57 crc kubenswrapper[5116]: I1212 16:31:57.596316 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7f2e62be-a53e-4a2f-ad15-10c4e33c351c","Type":"ContainerStarted","Data":"94b38ce969d2186f4d9a1d4c51730fb44b1ac4048d388a811b98ae5872e21f59"} Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.472278 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l"] Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.484435 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.484925 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l"] Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.488137 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.488173 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.490807 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a82c5432-4689-446e-a0c8-e61ae4f6335e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.490954 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.491022 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-925z9\" (UniqueName: \"kubernetes.io/projected/a82c5432-4689-446e-a0c8-e61ae4f6335e-kube-api-access-925z9\") pod 
\"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.491168 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.491226 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a82c5432-4689-446e-a0c8-e61ae4f6335e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.592781 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.592836 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-925z9\" (UniqueName: \"kubernetes.io/projected/a82c5432-4689-446e-a0c8-e61ae4f6335e-kube-api-access-925z9\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.592908 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.592931 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a82c5432-4689-446e-a0c8-e61ae4f6335e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.592969 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a82c5432-4689-446e-a0c8-e61ae4f6335e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: E1212 16:31:58.593538 5116 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 16:31:58 crc kubenswrapper[5116]: E1212 16:31:58.593687 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls podName:a82c5432-4689-446e-a0c8-e61ae4f6335e nodeName:}" failed. 
No retries permitted until 2025-12-12 16:31:59.093643987 +0000 UTC m=+1013.557855743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" (UID: "a82c5432-4689-446e-a0c8-e61ae4f6335e") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.593926 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a82c5432-4689-446e-a0c8-e61ae4f6335e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.594066 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a82c5432-4689-446e-a0c8-e61ae4f6335e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.604590 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:58 crc kubenswrapper[5116]: I1212 16:31:58.609934 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-925z9\" (UniqueName: 
\"kubernetes.io/projected/a82c5432-4689-446e-a0c8-e61ae4f6335e-kube-api-access-925z9\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:59 crc kubenswrapper[5116]: E1212 16:31:59.047079 5116 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 12 16:31:59 crc kubenswrapper[5116]: I1212 16:31:59.100846 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:31:59 crc kubenswrapper[5116]: E1212 16:31:59.101620 5116 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 16:31:59 crc kubenswrapper[5116]: E1212 16:31:59.102597 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls podName:a82c5432-4689-446e-a0c8-e61ae4f6335e nodeName:}" failed. No retries permitted until 2025-12-12 16:32:00.10256932 +0000 UTC m=+1014.566781086 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" (UID: "a82c5432-4689-446e-a0c8-e61ae4f6335e") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 16:32:00 crc kubenswrapper[5116]: I1212 16:32:00.120548 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:32:00 crc kubenswrapper[5116]: I1212 16:32:00.151683 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82c5432-4689-446e-a0c8-e61ae4f6335e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l\" (UID: \"a82c5432-4689-446e-a0c8-e61ae4f6335e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:32:00 crc kubenswrapper[5116]: I1212 16:32:00.323806 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" Dec 12 16:32:00 crc kubenswrapper[5116]: I1212 16:32:00.631193 5116 generic.go:358] "Generic (PLEG): container finished" podID="9c8f3436-2ac8-48a7-a30f-fb5e454fbc23" containerID="78dafc35ad21a860b3c0ab7d6a0b014ca70931e7eb0903ccca022c25cf4a8bc4" exitCode=0 Dec 12 16:32:00 crc kubenswrapper[5116]: I1212 16:32:00.631516 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23","Type":"ContainerDied","Data":"78dafc35ad21a860b3c0ab7d6a0b014ca70931e7eb0903ccca022c25cf4a8bc4"} Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.126832 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.139045 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.161991 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51856: no serving certificate available for the kubelet" Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.193785 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51872: no serving certificate available for the kubelet" Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.231611 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51878: no serving certificate available for the kubelet" Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.279908 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51894: no serving certificate available for the kubelet" Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.355619 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51896: no serving certificate available for the kubelet" Dec 12 16:32:01 crc 
kubenswrapper[5116]: I1212 16:32:01.468993 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51912: no serving certificate available for the kubelet" Dec 12 16:32:01 crc kubenswrapper[5116]: I1212 16:32:01.655575 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51926: no serving certificate available for the kubelet" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.019561 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51930: no serving certificate available for the kubelet" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.708491 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp"] Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.714657 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51932: no serving certificate available for the kubelet" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.753935 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp"] Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.754162 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.758552 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.759303 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.879141 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.879251 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.879275 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc 
kubenswrapper[5116]: I1212 16:32:02.879318 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.879382 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7764\" (UniqueName: \"kubernetes.io/projected/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-kube-api-access-p7764\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.981189 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7764\" (UniqueName: \"kubernetes.io/projected/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-kube-api-access-p7764\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.981280 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.981375 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.981433 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: E1212 16:32:02.981438 5116 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.981475 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: E1212 16:32:02.981534 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls podName:25ee0738-c2d6-4454-9603-7e7f4b7d2e18 nodeName:}" failed. No retries permitted until 2025-12-12 16:32:03.481511084 +0000 UTC m=+1017.945722840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" (UID: "25ee0738-c2d6-4454-9603-7e7f4b7d2e18") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.981988 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.982816 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:02 crc kubenswrapper[5116]: I1212 16:32:02.991205 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:03 crc kubenswrapper[5116]: I1212 16:32:03.002557 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7764\" (UniqueName: \"kubernetes.io/projected/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-kube-api-access-p7764\") pod 
\"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:03 crc kubenswrapper[5116]: I1212 16:32:03.358216 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l"] Dec 12 16:32:03 crc kubenswrapper[5116]: I1212 16:32:03.491954 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:03 crc kubenswrapper[5116]: E1212 16:32:03.492210 5116 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 16:32:03 crc kubenswrapper[5116]: E1212 16:32:03.492315 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls podName:25ee0738-c2d6-4454-9603-7e7f4b7d2e18 nodeName:}" failed. No retries permitted until 2025-12-12 16:32:04.492292477 +0000 UTC m=+1018.956504233 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" (UID: "25ee0738-c2d6-4454-9603-7e7f4b7d2e18") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 16:32:04 crc kubenswrapper[5116]: I1212 16:32:04.032431 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51948: no serving certificate available for the kubelet" Dec 12 16:32:04 crc kubenswrapper[5116]: I1212 16:32:04.506602 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:04 crc kubenswrapper[5116]: I1212 16:32:04.514082 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25ee0738-c2d6-4454-9603-7e7f4b7d2e18-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp\" (UID: \"25ee0738-c2d6-4454-9603-7e7f4b7d2e18\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:04 crc kubenswrapper[5116]: I1212 16:32:04.581305 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" Dec 12 16:32:05 crc kubenswrapper[5116]: I1212 16:32:05.675141 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" event={"ID":"a82c5432-4689-446e-a0c8-e61ae4f6335e","Type":"ContainerStarted","Data":"22fc61f3f30a37ad3f5dd867c669966cec73cdd738fcb4f3f3a374fb2bbf66c8"} Dec 12 16:32:06 crc kubenswrapper[5116]: I1212 16:32:06.627993 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51958: no serving certificate available for the kubelet" Dec 12 16:32:06 crc kubenswrapper[5116]: I1212 16:32:06.767374 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp"] Dec 12 16:32:07 crc kubenswrapper[5116]: W1212 16:32:07.011895 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25ee0738_c2d6_4454_9603_7e7f4b7d2e18.slice/crio-933ec18a0a47e2a91f22d328ebae5fb17f9f029fa02b0d8cc9779aa83a8dd0a6 WatchSource:0}: Error finding container 933ec18a0a47e2a91f22d328ebae5fb17f9f029fa02b0d8cc9779aa83a8dd0a6: Status 404 returned error can't find the container with id 933ec18a0a47e2a91f22d328ebae5fb17f9f029fa02b0d8cc9779aa83a8dd0a6 Dec 12 16:32:07 crc kubenswrapper[5116]: I1212 16:32:07.697284 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" event={"ID":"5d5d9511-fa2b-4ece-9cb7-24c530042ec9","Type":"ContainerStarted","Data":"f53b22d31b0dd7166b256d688161b469722d0efb9d933ca31cd6da746079a0dd"} Dec 12 16:32:07 crc kubenswrapper[5116]: I1212 16:32:07.699381 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" 
event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerStarted","Data":"933ec18a0a47e2a91f22d328ebae5fb17f9f029fa02b0d8cc9779aa83a8dd0a6"} Dec 12 16:32:08 crc kubenswrapper[5116]: I1212 16:32:08.708138 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" event={"ID":"a82c5432-4689-446e-a0c8-e61ae4f6335e","Type":"ContainerStarted","Data":"083a76fb3fc28a08698fb07b92699660211c509946632573abf7c3d63eb987de"} Dec 12 16:32:08 crc kubenswrapper[5116]: I1212 16:32:08.710087 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerStarted","Data":"96cacae2dd8fb0b7125e150c898a35cca1edc061ac311211ca53823960e639a2"} Dec 12 16:32:08 crc kubenswrapper[5116]: I1212 16:32:08.714287 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7f2e62be-a53e-4a2f-ad15-10c4e33c351c","Type":"ContainerStarted","Data":"f0cece3301a56e90ae98e981f40536921e4995f0846b9f4951e5f8e382ce9a4b"} Dec 12 16:32:08 crc kubenswrapper[5116]: I1212 16:32:08.747945 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.393441889 podStartE2EDuration="51.747918056s" podCreationTimestamp="2025-12-12 16:31:17 +0000 UTC" firstStartedPulling="2025-12-12 16:31:21.009707503 +0000 UTC m=+975.473919259" lastFinishedPulling="2025-12-12 16:32:07.36418367 +0000 UTC m=+1021.828395426" observedRunningTime="2025-12-12 16:32:08.747185696 +0000 UTC m=+1023.211397452" watchObservedRunningTime="2025-12-12 16:32:08.747918056 +0000 UTC m=+1023.212129832" Dec 12 16:32:10 crc kubenswrapper[5116]: I1212 16:32:10.731320 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Dec 12 16:32:11 crc 
kubenswrapper[5116]: I1212 16:32:11.345752 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl"] Dec 12 16:32:11 crc kubenswrapper[5116]: I1212 16:32:11.773277 5116 ???:1] "http: TLS handshake error from 192.168.126.11:56272: no serving certificate available for the kubelet" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.370396 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl"] Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.370465 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t"] Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.371094 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.376059 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.377382 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.452262 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwhd\" (UniqueName: \"kubernetes.io/projected/ba7256f3-76be-407c-8011-5323c7ee98e0-kube-api-access-nmwhd\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.452385 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ba7256f3-76be-407c-8011-5323c7ee98e0-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.452414 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ba7256f3-76be-407c-8011-5323c7ee98e0-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.452454 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba7256f3-76be-407c-8011-5323c7ee98e0-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.554175 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nmwhd\" (UniqueName: \"kubernetes.io/projected/ba7256f3-76be-407c-8011-5323c7ee98e0-kube-api-access-nmwhd\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.554340 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/ba7256f3-76be-407c-8011-5323c7ee98e0-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.554391 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ba7256f3-76be-407c-8011-5323c7ee98e0-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.554466 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba7256f3-76be-407c-8011-5323c7ee98e0-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.555384 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba7256f3-76be-407c-8011-5323c7ee98e0-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.555665 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ba7256f3-76be-407c-8011-5323c7ee98e0-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.574435 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ba7256f3-76be-407c-8011-5323c7ee98e0-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.577444 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmwhd\" (UniqueName: \"kubernetes.io/projected/ba7256f3-76be-407c-8011-5323c7ee98e0-kube-api-access-nmwhd\") pod \"default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl\" (UID: \"ba7256f3-76be-407c-8011-5323c7ee98e0\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.637120 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t"] Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.637891 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.642697 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.699812 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.757009 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/85533c93-a2c1-44e9-ad48-9b1140940386-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.757139 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/85533c93-a2c1-44e9-ad48-9b1140940386-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.757632 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/85533c93-a2c1-44e9-ad48-9b1140940386-kube-api-access-vm5bj\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.758066 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/85533c93-a2c1-44e9-ad48-9b1140940386-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 
16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.859853 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/85533c93-a2c1-44e9-ad48-9b1140940386-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.859936 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/85533c93-a2c1-44e9-ad48-9b1140940386-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.859981 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/85533c93-a2c1-44e9-ad48-9b1140940386-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.860013 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/85533c93-a2c1-44e9-ad48-9b1140940386-kube-api-access-vm5bj\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.861247 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/85533c93-a2c1-44e9-ad48-9b1140940386-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.862601 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/85533c93-a2c1-44e9-ad48-9b1140940386-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.866285 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/85533c93-a2c1-44e9-ad48-9b1140940386-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.879197 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5bj\" (UniqueName: \"kubernetes.io/projected/85533c93-a2c1-44e9-ad48-9b1140940386-kube-api-access-vm5bj\") pod \"default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t\" (UID: \"85533c93-a2c1-44e9-ad48-9b1140940386\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:12 crc kubenswrapper[5116]: I1212 16:32:12.962239 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" Dec 12 16:32:19 crc kubenswrapper[5116]: I1212 16:32:19.416469 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:32:19 crc kubenswrapper[5116]: I1212 16:32:19.417425 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:32:20 crc kubenswrapper[5116]: I1212 16:32:20.530096 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t"] Dec 12 16:32:20 crc kubenswrapper[5116]: I1212 16:32:20.732183 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 12 16:32:20 crc kubenswrapper[5116]: I1212 16:32:20.789676 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 12 16:32:20 crc kubenswrapper[5116]: I1212 16:32:20.790165 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl"] Dec 12 16:32:20 crc kubenswrapper[5116]: W1212 16:32:20.795333 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7256f3_76be_407c_8011_5323c7ee98e0.slice/crio-cd8ca8090b655c1e06e25157eea5a55feccb7ed18a38c40832a1277937c2689d WatchSource:0}: Error finding container 
cd8ca8090b655c1e06e25157eea5a55feccb7ed18a38c40832a1277937c2689d: Status 404 returned error can't find the container with id cd8ca8090b655c1e06e25157eea5a55feccb7ed18a38c40832a1277937c2689d Dec 12 16:32:20 crc kubenswrapper[5116]: I1212 16:32:20.833349 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" event={"ID":"ba7256f3-76be-407c-8011-5323c7ee98e0","Type":"ContainerStarted","Data":"cd8ca8090b655c1e06e25157eea5a55feccb7ed18a38c40832a1277937c2689d"} Dec 12 16:32:20 crc kubenswrapper[5116]: I1212 16:32:20.836268 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" event={"ID":"85533c93-a2c1-44e9-ad48-9b1140940386","Type":"ContainerStarted","Data":"7dc52844e3d390c9437a0772579634d55538b744f89b58c3d021ebe650818c55"} Dec 12 16:32:20 crc kubenswrapper[5116]: I1212 16:32:20.868432 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 12 16:32:21 crc kubenswrapper[5116]: I1212 16:32:21.882336 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerStarted","Data":"d32b7893b94364e43e980428810118fb7ef837092da6f7238788ae42b7589ae9"} Dec 12 16:32:21 crc kubenswrapper[5116]: I1212 16:32:21.884896 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" event={"ID":"a82c5432-4689-446e-a0c8-e61ae4f6335e","Type":"ContainerStarted","Data":"94520b083b2599f0b9e2f38ef6a5dfc2bf04d48b9ce310fefd2d39ed15258a32"} Dec 12 16:32:21 crc kubenswrapper[5116]: I1212 16:32:21.887034 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23","Type":"ContainerStarted","Data":"95120a657a4ae0d2dc907abb45201a0a26ca763cf4a1c102b8f468078db4d7be"} Dec 12 16:32:22 crc kubenswrapper[5116]: I1212 16:32:22.044369 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38124: no serving certificate available for the kubelet" Dec 12 16:32:22 crc kubenswrapper[5116]: I1212 16:32:22.897820 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" event={"ID":"ba7256f3-76be-407c-8011-5323c7ee98e0","Type":"ContainerStarted","Data":"f2a2cd89677e50497468e1bff09a4030ece4f36a13ac65b8396590ad1a1f1830"} Dec 12 16:32:22 crc kubenswrapper[5116]: I1212 16:32:22.900336 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" event={"ID":"5d5d9511-fa2b-4ece-9cb7-24c530042ec9","Type":"ContainerStarted","Data":"9744db5221040caa576057089b0ad32686ffe2f1983c2400c43829f49dcd620b"} Dec 12 16:32:22 crc kubenswrapper[5116]: I1212 16:32:22.901917 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" event={"ID":"85533c93-a2c1-44e9-ad48-9b1140940386","Type":"ContainerStarted","Data":"ea84c2c38d5647b0a42a1fde0567bdf4210f257c6dc1aab6b7adbeab89404ee9"} Dec 12 16:32:23 crc kubenswrapper[5116]: I1212 16:32:23.936480 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23","Type":"ContainerStarted","Data":"ae42ba35e1c8c6fb058847c15f4ad36f808e685d67a784d346f9e6176307366f"} Dec 12 16:32:28 crc kubenswrapper[5116]: I1212 16:32:28.764764 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-njxnn"] Dec 12 16:32:28 crc kubenswrapper[5116]: I1212 16:32:28.766540 5116 kuberuntime_container.go:858] "Killing container with 
a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" podUID="f185ad12-1126-40aa-929f-632af4f9cfe4" containerName="default-interconnect" containerID="cri-o://baadea4e9ddb8d2bbb9234ce9c486264747ccca46a25d705491bbb9f555c6ed3" gracePeriod=30 Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.012015 5116 generic.go:358] "Generic (PLEG): container finished" podID="f185ad12-1126-40aa-929f-632af4f9cfe4" containerID="baadea4e9ddb8d2bbb9234ce9c486264747ccca46a25d705491bbb9f555c6ed3" exitCode=0 Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.012140 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" event={"ID":"f185ad12-1126-40aa-929f-632af4f9cfe4","Type":"ContainerDied","Data":"baadea4e9ddb8d2bbb9234ce9c486264747ccca46a25d705491bbb9f555c6ed3"} Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.016345 5116 generic.go:358] "Generic (PLEG): container finished" podID="ba7256f3-76be-407c-8011-5323c7ee98e0" containerID="f2a2cd89677e50497468e1bff09a4030ece4f36a13ac65b8396590ad1a1f1830" exitCode=0 Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.016475 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" event={"ID":"ba7256f3-76be-407c-8011-5323c7ee98e0","Type":"ContainerDied","Data":"f2a2cd89677e50497468e1bff09a4030ece4f36a13ac65b8396590ad1a1f1830"} Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.018461 5116 generic.go:358] "Generic (PLEG): container finished" podID="a82c5432-4689-446e-a0c8-e61ae4f6335e" containerID="94520b083b2599f0b9e2f38ef6a5dfc2bf04d48b9ce310fefd2d39ed15258a32" exitCode=0 Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.018506 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" 
event={"ID":"a82c5432-4689-446e-a0c8-e61ae4f6335e","Type":"ContainerDied","Data":"94520b083b2599f0b9e2f38ef6a5dfc2bf04d48b9ce310fefd2d39ed15258a32"} Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.020525 5116 generic.go:358] "Generic (PLEG): container finished" podID="5d5d9511-fa2b-4ece-9cb7-24c530042ec9" containerID="9744db5221040caa576057089b0ad32686ffe2f1983c2400c43829f49dcd620b" exitCode=0 Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.020605 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" event={"ID":"5d5d9511-fa2b-4ece-9cb7-24c530042ec9","Type":"ContainerDied","Data":"9744db5221040caa576057089b0ad32686ffe2f1983c2400c43829f49dcd620b"} Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.022495 5116 generic.go:358] "Generic (PLEG): container finished" podID="85533c93-a2c1-44e9-ad48-9b1140940386" containerID="ea84c2c38d5647b0a42a1fde0567bdf4210f257c6dc1aab6b7adbeab89404ee9" exitCode=0 Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.022635 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" event={"ID":"85533c93-a2c1-44e9-ad48-9b1140940386","Type":"ContainerDied","Data":"ea84c2c38d5647b0a42a1fde0567bdf4210f257c6dc1aab6b7adbeab89404ee9"} Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.024537 5116 generic.go:358] "Generic (PLEG): container finished" podID="25ee0738-c2d6-4454-9603-7e7f4b7d2e18" containerID="d32b7893b94364e43e980428810118fb7ef837092da6f7238788ae42b7589ae9" exitCode=0 Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.024565 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerDied","Data":"d32b7893b94364e43e980428810118fb7ef837092da6f7238788ae42b7589ae9"} Dec 12 16:32:30 crc kubenswrapper[5116]: 
I1212 16:32:30.096324 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.146015 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-pnctf"] Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.147822 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f185ad12-1126-40aa-929f-632af4f9cfe4" containerName="default-interconnect" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.147853 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f185ad12-1126-40aa-929f-632af4f9cfe4" containerName="default-interconnect" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.148081 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="f185ad12-1126-40aa-929f-632af4f9cfe4" containerName="default-interconnect" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.152607 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.157729 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxvp2\" (UniqueName: \"kubernetes.io/projected/f185ad12-1126-40aa-929f-632af4f9cfe4-kube-api-access-xxvp2\") pod \"f185ad12-1126-40aa-929f-632af4f9cfe4\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.157811 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-config\") pod \"f185ad12-1126-40aa-929f-632af4f9cfe4\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.157950 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-ca\") pod \"f185ad12-1126-40aa-929f-632af4f9cfe4\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.158082 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-ca\") pod \"f185ad12-1126-40aa-929f-632af4f9cfe4\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.158258 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-credentials\") pod \"f185ad12-1126-40aa-929f-632af4f9cfe4\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " Dec 12 
16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.158287 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-credentials\") pod \"f185ad12-1126-40aa-929f-632af4f9cfe4\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.158309 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-users\") pod \"f185ad12-1126-40aa-929f-632af4f9cfe4\" (UID: \"f185ad12-1126-40aa-929f-632af4f9cfe4\") " Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.158744 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "f185ad12-1126-40aa-929f-632af4f9cfe4" (UID: "f185ad12-1126-40aa-929f-632af4f9cfe4"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.163820 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-pnctf"] Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.187852 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "f185ad12-1126-40aa-929f-632af4f9cfe4" (UID: "f185ad12-1126-40aa-929f-632af4f9cfe4"). InnerVolumeSpecName "sasl-users". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.187859 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f185ad12-1126-40aa-929f-632af4f9cfe4-kube-api-access-xxvp2" (OuterVolumeSpecName: "kube-api-access-xxvp2") pod "f185ad12-1126-40aa-929f-632af4f9cfe4" (UID: "f185ad12-1126-40aa-929f-632af4f9cfe4"). InnerVolumeSpecName "kube-api-access-xxvp2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.198482 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "f185ad12-1126-40aa-929f-632af4f9cfe4" (UID: "f185ad12-1126-40aa-929f-632af4f9cfe4"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.199651 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "f185ad12-1126-40aa-929f-632af4f9cfe4" (UID: "f185ad12-1126-40aa-929f-632af4f9cfe4"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.202304 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "f185ad12-1126-40aa-929f-632af4f9cfe4" (UID: "f185ad12-1126-40aa-929f-632af4f9cfe4"). 
InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.202390 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "f185ad12-1126-40aa-929f-632af4f9cfe4" (UID: "f185ad12-1126-40aa-929f-632af4f9cfe4"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262487 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262560 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gq5f\" (UniqueName: \"kubernetes.io/projected/c22d66c8-369b-4e8d-82a6-aeafd05448b6-kube-api-access-6gq5f\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262599 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262631 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262748 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c22d66c8-369b-4e8d-82a6-aeafd05448b6-sasl-config\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262781 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262835 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-sasl-users\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262888 5116 reconciler_common.go:299] "Volume detached for volume 
\"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262904 5116 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262922 5116 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.262938 5116 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.263055 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxvp2\" (UniqueName: \"kubernetes.io/projected/f185ad12-1126-40aa-929f-632af4f9cfe4-kube-api-access-xxvp2\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.263070 5116 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f185ad12-1126-40aa-929f-632af4f9cfe4-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.263147 5116 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f185ad12-1126-40aa-929f-632af4f9cfe4-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 
16:32:30.364404 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c22d66c8-369b-4e8d-82a6-aeafd05448b6-sasl-config\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.364823 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.364874 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-sasl-users\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.364906 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.364935 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6gq5f\" (UniqueName: \"kubernetes.io/projected/c22d66c8-369b-4e8d-82a6-aeafd05448b6-kube-api-access-6gq5f\") pod \"default-interconnect-55bf8d5cb-pnctf\" 
(UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.364966 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.364991 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.366086 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c22d66c8-369b-4e8d-82a6-aeafd05448b6-sasl-config\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.370742 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.370742 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-sasl-users\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.371446 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.373156 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.374594 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c22d66c8-369b-4e8d-82a6-aeafd05448b6-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.384333 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gq5f\" (UniqueName: \"kubernetes.io/projected/c22d66c8-369b-4e8d-82a6-aeafd05448b6-kube-api-access-6gq5f\") pod \"default-interconnect-55bf8d5cb-pnctf\" (UID: \"c22d66c8-369b-4e8d-82a6-aeafd05448b6\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.473777 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" Dec 12 16:32:30 crc kubenswrapper[5116]: I1212 16:32:30.989240 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-pnctf"] Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.035599 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerStarted","Data":"121ffd15d1f3eb04be681d39165a2e593af9bc0af8e5709f8b718da8595bfdbc"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.037530 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" event={"ID":"f185ad12-1126-40aa-929f-632af4f9cfe4","Type":"ContainerDied","Data":"0ad154fdb96264fca09bd548bb9940d50a14af083e64a87144ca06427855c259"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.037587 5116 scope.go:117] "RemoveContainer" containerID="baadea4e9ddb8d2bbb9234ce9c486264747ccca46a25d705491bbb9f555c6ed3" Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.037749 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-njxnn" Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.043482 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" event={"ID":"ba7256f3-76be-407c-8011-5323c7ee98e0","Type":"ContainerStarted","Data":"3cebea6b69732675fbf578fc069ea3e1ab710845108e25222dc82b8410615976"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.045625 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" event={"ID":"a82c5432-4689-446e-a0c8-e61ae4f6335e","Type":"ContainerStarted","Data":"fa30dda773d0aae420c7a8b9ca128464a414007c1da1a730631e472e0e6c6b09"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.046485 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" event={"ID":"c22d66c8-369b-4e8d-82a6-aeafd05448b6","Type":"ContainerStarted","Data":"6b3d2dfc158f68934e3b062e860de7ffa9fd66cde0331b574dd75cee70324c6f"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.049411 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9c8f3436-2ac8-48a7-a30f-fb5e454fbc23","Type":"ContainerStarted","Data":"069e8f736a059a731a352e653befcaf71d2c5fb2736d7b38c206d63829b0fab8"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.054956 5116 scope.go:117] "RemoveContainer" containerID="f2a2cd89677e50497468e1bff09a4030ece4f36a13ac65b8396590ad1a1f1830" Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.055561 5116 scope.go:117] "RemoveContainer" containerID="94520b083b2599f0b9e2f38ef6a5dfc2bf04d48b9ce310fefd2d39ed15258a32" Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.056057 5116 scope.go:117] "RemoveContainer" containerID="d32b7893b94364e43e980428810118fb7ef837092da6f7238788ae42b7589ae9" Dec 12 16:32:31 crc 
kubenswrapper[5116]: I1212 16:32:31.057662 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" event={"ID":"5d5d9511-fa2b-4ece-9cb7-24c530042ec9","Type":"ContainerStarted","Data":"fa641f7cefbb389abce2f9474d7ca4d15179e0b976d73a929e8504573a269341"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.057903 5116 scope.go:117] "RemoveContainer" containerID="9744db5221040caa576057089b0ad32686ffe2f1983c2400c43829f49dcd620b" Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.065627 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" event={"ID":"85533c93-a2c1-44e9-ad48-9b1140940386","Type":"ContainerStarted","Data":"0ddda491c20f8b0f80b58da62fec912793c5770c88544730fd3a453890a20f21"} Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.066420 5116 scope.go:117] "RemoveContainer" containerID="ea84c2c38d5647b0a42a1fde0567bdf4210f257c6dc1aab6b7adbeab89404ee9" Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.132474 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=29.896312401 podStartE2EDuration="59.132447864s" podCreationTimestamp="2025-12-12 16:31:32 +0000 UTC" firstStartedPulling="2025-12-12 16:32:00.632807064 +0000 UTC m=+1015.097018820" lastFinishedPulling="2025-12-12 16:32:29.868942527 +0000 UTC m=+1044.333154283" observedRunningTime="2025-12-12 16:32:31.107743148 +0000 UTC m=+1045.571954994" watchObservedRunningTime="2025-12-12 16:32:31.132447864 +0000 UTC m=+1045.596659630" Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.139476 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-njxnn"] Dec 12 16:32:31 crc kubenswrapper[5116]: I1212 16:32:31.147238 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["service-telemetry/default-interconnect-55bf8d5cb-njxnn"] Dec 12 16:32:32 crc kubenswrapper[5116]: I1212 16:32:32.056736 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f185ad12-1126-40aa-929f-632af4f9cfe4" path="/var/lib/kubelet/pods/f185ad12-1126-40aa-929f-632af4f9cfe4/volumes" Dec 12 16:32:32 crc kubenswrapper[5116]: I1212 16:32:32.078912 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" event={"ID":"c22d66c8-369b-4e8d-82a6-aeafd05448b6","Type":"ContainerStarted","Data":"1f0fd889ac4188c95035d04ccaff3feee4a2af622bbc532eae4125f808deae2b"} Dec 12 16:32:32 crc kubenswrapper[5116]: I1212 16:32:32.102538 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-pnctf" podStartSLOduration=4.102511643 podStartE2EDuration="4.102511643s" podCreationTimestamp="2025-12-12 16:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:32:32.09756741 +0000 UTC m=+1046.561779176" watchObservedRunningTime="2025-12-12 16:32:32.102511643 +0000 UTC m=+1046.566723399" Dec 12 16:32:33 crc kubenswrapper[5116]: I1212 16:32:33.090422 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerStarted","Data":"270e71170e7b97d742b9fd9c72679cb1bbdfd95c945b4b0f15664cc95aa2cd24"} Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.106413 5116 generic.go:358] "Generic (PLEG): container finished" podID="25ee0738-c2d6-4454-9603-7e7f4b7d2e18" containerID="270e71170e7b97d742b9fd9c72679cb1bbdfd95c945b4b0f15664cc95aa2cd24" exitCode=0 Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.107265 5116 scope.go:117] "RemoveContainer" 
containerID="270e71170e7b97d742b9fd9c72679cb1bbdfd95c945b4b0f15664cc95aa2cd24" Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.106544 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerDied","Data":"270e71170e7b97d742b9fd9c72679cb1bbdfd95c945b4b0f15664cc95aa2cd24"} Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.107344 5116 scope.go:117] "RemoveContainer" containerID="d32b7893b94364e43e980428810118fb7ef837092da6f7238788ae42b7589ae9" Dec 12 16:32:34 crc kubenswrapper[5116]: E1212 16:32:34.107848 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp_service-telemetry(25ee0738-c2d6-4454-9603-7e7f4b7d2e18)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" podUID="25ee0738-c2d6-4454-9603-7e7f4b7d2e18" Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.111148 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" event={"ID":"ba7256f3-76be-407c-8011-5323c7ee98e0","Type":"ContainerStarted","Data":"302acf1df7742a3670e1715c88ac5bb97c8970fb70ae91423d30f174b12f91c9"} Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.128170 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" event={"ID":"a82c5432-4689-446e-a0c8-e61ae4f6335e","Type":"ContainerStarted","Data":"53f379c010cd6859dfa2ef689231376190c0be12c844fc03f1d38fc1e5f78704"} Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.130778 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" 
event={"ID":"85533c93-a2c1-44e9-ad48-9b1140940386","Type":"ContainerStarted","Data":"a681083c2eda59e5f01ff4a3a70e29abcadc71024bbb3d546d545dcd76fe49cb"} Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.162472 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-k4r8l" podStartSLOduration=9.025575037 podStartE2EDuration="36.162444161s" podCreationTimestamp="2025-12-12 16:31:58 +0000 UTC" firstStartedPulling="2025-12-12 16:32:05.458413667 +0000 UTC m=+1019.922625423" lastFinishedPulling="2025-12-12 16:32:32.595282791 +0000 UTC m=+1047.059494547" observedRunningTime="2025-12-12 16:32:34.16018881 +0000 UTC m=+1048.624400586" watchObservedRunningTime="2025-12-12 16:32:34.162444161 +0000 UTC m=+1048.626655917" Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.199872 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-788fdd8d9b-whzpl" podStartSLOduration=11.400958943 podStartE2EDuration="23.199843628s" podCreationTimestamp="2025-12-12 16:32:11 +0000 UTC" firstStartedPulling="2025-12-12 16:32:20.797348002 +0000 UTC m=+1035.261559758" lastFinishedPulling="2025-12-12 16:32:32.596232687 +0000 UTC m=+1047.060444443" observedRunningTime="2025-12-12 16:32:34.194587337 +0000 UTC m=+1048.658799103" watchObservedRunningTime="2025-12-12 16:32:34.199843628 +0000 UTC m=+1048.664055384" Dec 12 16:32:34 crc kubenswrapper[5116]: I1212 16:32:34.222809 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5fb7559989-wdl4t" podStartSLOduration=10.220303366 podStartE2EDuration="22.222784447s" podCreationTimestamp="2025-12-12 16:32:12 +0000 UTC" firstStartedPulling="2025-12-12 16:32:20.545122936 +0000 UTC m=+1035.009334692" lastFinishedPulling="2025-12-12 16:32:32.547604017 +0000 UTC m=+1047.011815773" 
observedRunningTime="2025-12-12 16:32:34.216424435 +0000 UTC m=+1048.680636181" watchObservedRunningTime="2025-12-12 16:32:34.222784447 +0000 UTC m=+1048.686996193" Dec 12 16:32:35 crc kubenswrapper[5116]: I1212 16:32:35.141724 5116 scope.go:117] "RemoveContainer" containerID="270e71170e7b97d742b9fd9c72679cb1bbdfd95c945b4b0f15664cc95aa2cd24" Dec 12 16:32:35 crc kubenswrapper[5116]: E1212 16:32:35.142677 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp_service-telemetry(25ee0738-c2d6-4454-9603-7e7f4b7d2e18)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" podUID="25ee0738-c2d6-4454-9603-7e7f4b7d2e18" Dec 12 16:32:35 crc kubenswrapper[5116]: I1212 16:32:35.144490 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" event={"ID":"5d5d9511-fa2b-4ece-9cb7-24c530042ec9","Type":"ContainerStarted","Data":"04a0e1a2a75f012be45faf9f0b2fe01cfb18c3c3eb7f2c303a82e8bc2a1b39d5"} Dec 12 16:32:35 crc kubenswrapper[5116]: I1212 16:32:35.196241 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-vz8fs" podStartSLOduration=4.211174164 podStartE2EDuration="42.196214997s" podCreationTimestamp="2025-12-12 16:31:53 +0000 UTC" firstStartedPulling="2025-12-12 16:31:56.107410262 +0000 UTC m=+1010.571622018" lastFinishedPulling="2025-12-12 16:32:34.092451095 +0000 UTC m=+1048.556662851" observedRunningTime="2025-12-12 16:32:35.193404871 +0000 UTC m=+1049.657616657" watchObservedRunningTime="2025-12-12 16:32:35.196214997 +0000 UTC m=+1049.660426753" Dec 12 16:32:42 crc kubenswrapper[5116]: I1212 16:32:42.557338 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35224: no serving certificate 
available for the kubelet" Dec 12 16:32:46 crc kubenswrapper[5116]: I1212 16:32:46.051758 5116 scope.go:117] "RemoveContainer" containerID="270e71170e7b97d742b9fd9c72679cb1bbdfd95c945b4b0f15664cc95aa2cd24" Dec 12 16:32:47 crc kubenswrapper[5116]: I1212 16:32:47.285213 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" event={"ID":"25ee0738-c2d6-4454-9603-7e7f4b7d2e18","Type":"ContainerStarted","Data":"fe99de7ea2454bd30f2a5357141d49b2e159133cad12bd7e27698794a564b9f0"} Dec 12 16:32:47 crc kubenswrapper[5116]: I1212 16:32:47.305067 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-qcxgp" podStartSLOduration=5.313387899 podStartE2EDuration="45.305046084s" podCreationTimestamp="2025-12-12 16:32:02 +0000 UTC" firstStartedPulling="2025-12-12 16:32:07.01314657 +0000 UTC m=+1021.477358326" lastFinishedPulling="2025-12-12 16:32:47.004804735 +0000 UTC m=+1061.469016511" observedRunningTime="2025-12-12 16:32:47.303655127 +0000 UTC m=+1061.767866893" watchObservedRunningTime="2025-12-12 16:32:47.305046084 +0000 UTC m=+1061.769257840" Dec 12 16:32:49 crc kubenswrapper[5116]: I1212 16:32:49.415768 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:32:49 crc kubenswrapper[5116]: I1212 16:32:49.415862 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:32:49 crc 
kubenswrapper[5116]: I1212 16:32:49.415940 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:32:49 crc kubenswrapper[5116]: I1212 16:32:49.416786 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6984eb907933f60d328ca599e81410bd76181c70d8e7532f77a0eff2370beae5"} pod="openshift-machine-config-operator/machine-config-daemon-bb58t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:32:49 crc kubenswrapper[5116]: I1212 16:32:49.416843 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" containerID="cri-o://6984eb907933f60d328ca599e81410bd76181c70d8e7532f77a0eff2370beae5" gracePeriod=600 Dec 12 16:32:53 crc kubenswrapper[5116]: I1212 16:32:53.858826 5116 generic.go:358] "Generic (PLEG): container finished" podID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerID="6984eb907933f60d328ca599e81410bd76181c70d8e7532f77a0eff2370beae5" exitCode=0 Dec 12 16:32:53 crc kubenswrapper[5116]: I1212 16:32:53.858954 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerDied","Data":"6984eb907933f60d328ca599e81410bd76181c70d8e7532f77a0eff2370beae5"} Dec 12 16:32:53 crc kubenswrapper[5116]: I1212 16:32:53.859721 5116 scope.go:117] "RemoveContainer" containerID="85975e01cd9e5ce0c52a47772394bbc32f968256a73c3499bc14dec7e81dc5eb" Dec 12 16:32:54 crc kubenswrapper[5116]: I1212 16:32:54.870680 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" 
event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"7c8bd92412771ff512e33cbee9cb6403cfdd2288cceb73946bb3fd16bc6d5c27"} Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.067863 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.136200 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.136404 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.140163 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.140295 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.216313 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmhkg\" (UniqueName: \"kubernetes.io/projected/73fa8e5f-622f-44b2-a549-2afccc0121da-kube-api-access-mmhkg\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.216847 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/73fa8e5f-622f-44b2-a549-2afccc0121da-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.217310 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/73fa8e5f-622f-44b2-a549-2afccc0121da-qdr-test-config\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.318873 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/73fa8e5f-622f-44b2-a549-2afccc0121da-qdr-test-config\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.319238 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mmhkg\" (UniqueName: \"kubernetes.io/projected/73fa8e5f-622f-44b2-a549-2afccc0121da-kube-api-access-mmhkg\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.319343 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/73fa8e5f-622f-44b2-a549-2afccc0121da-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.321223 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/73fa8e5f-622f-44b2-a549-2afccc0121da-qdr-test-config\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.330926 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/73fa8e5f-622f-44b2-a549-2afccc0121da-default-interconnect-selfsigned-cert\") pod 
\"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.343082 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmhkg\" (UniqueName: \"kubernetes.io/projected/73fa8e5f-622f-44b2-a549-2afccc0121da-kube-api-access-mmhkg\") pod \"qdr-test\" (UID: \"73fa8e5f-622f-44b2-a549-2afccc0121da\") " pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.455405 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.744699 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 12 16:32:57 crc kubenswrapper[5116]: I1212 16:32:57.897174 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"73fa8e5f-622f-44b2-a549-2afccc0121da","Type":"ContainerStarted","Data":"add6fff8e32038cd92b21d8d90b376442de91e09479d09cdf7bad0e8130e9c82"} Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.016070 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"73fa8e5f-622f-44b2-a549-2afccc0121da","Type":"ContainerStarted","Data":"d81e4c78cdfbea0f2180e225041398faa6a0ce765140a37aef2e2d2cecabcc8c"} Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.037136 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.494586481 podStartE2EDuration="13.037100327s" podCreationTimestamp="2025-12-12 16:32:57 +0000 UTC" firstStartedPulling="2025-12-12 16:32:57.737979763 +0000 UTC m=+1072.202191519" lastFinishedPulling="2025-12-12 16:33:09.280493599 +0000 UTC m=+1083.744705365" observedRunningTime="2025-12-12 16:33:10.035485374 +0000 UTC m=+1084.499697140" watchObservedRunningTime="2025-12-12 16:33:10.037100327 +0000 UTC 
m=+1084.501312083" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.393136 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-zpkx6"] Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.430064 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-zpkx6"] Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.430583 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.434884 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.435163 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.435819 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.436078 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.435844 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.435904 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.545651 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.545753 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.545874 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdkpq\" (UniqueName: \"kubernetes.io/projected/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-kube-api-access-bdkpq\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.545929 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-sensubility-config\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.546006 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-config\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.546044 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-healthcheck-log\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.546087 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.647823 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-sensubility-config\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.648267 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-config\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.648378 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-healthcheck-log\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc 
kubenswrapper[5116]: I1212 16:33:10.648495 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.648613 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.648736 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.648874 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bdkpq\" (UniqueName: \"kubernetes.io/projected/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-kube-api-access-bdkpq\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.650701 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-sensubility-config\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " 
pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.651858 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-healthcheck-log\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.652124 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-config\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.652387 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.653117 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.653475 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " 
pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.677382 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdkpq\" (UniqueName: \"kubernetes.io/projected/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-kube-api-access-bdkpq\") pod \"stf-smoketest-smoke1-zpkx6\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") " pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.754452 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.839020 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.849779 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.854896 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 12 16:33:10 crc kubenswrapper[5116]: I1212 16:33:10.955475 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4x6\" (UniqueName: \"kubernetes.io/projected/75de6d78-4896-42c9-a386-4acfa056603e-kube-api-access-sf4x6\") pod \"curl\" (UID: \"75de6d78-4896-42c9-a386-4acfa056603e\") " pod="service-telemetry/curl" Dec 12 16:33:11 crc kubenswrapper[5116]: I1212 16:33:11.058315 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4x6\" (UniqueName: \"kubernetes.io/projected/75de6d78-4896-42c9-a386-4acfa056603e-kube-api-access-sf4x6\") pod \"curl\" (UID: \"75de6d78-4896-42c9-a386-4acfa056603e\") " pod="service-telemetry/curl" Dec 12 16:33:11 crc kubenswrapper[5116]: I1212 16:33:11.064131 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/stf-smoketest-smoke1-zpkx6"] Dec 12 16:33:11 crc kubenswrapper[5116]: I1212 16:33:11.084833 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4x6\" (UniqueName: \"kubernetes.io/projected/75de6d78-4896-42c9-a386-4acfa056603e-kube-api-access-sf4x6\") pod \"curl\" (UID: \"75de6d78-4896-42c9-a386-4acfa056603e\") " pod="service-telemetry/curl" Dec 12 16:33:11 crc kubenswrapper[5116]: I1212 16:33:11.182451 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 16:33:11 crc kubenswrapper[5116]: I1212 16:33:11.434718 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 12 16:33:11 crc kubenswrapper[5116]: W1212 16:33:11.442629 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75de6d78_4896_42c9_a386_4acfa056603e.slice/crio-8637ca183127c7617d4d45a526d1ec6b9df01d50dc829df05c875d891e21aa45 WatchSource:0}: Error finding container 8637ca183127c7617d4d45a526d1ec6b9df01d50dc829df05c875d891e21aa45: Status 404 returned error can't find the container with id 8637ca183127c7617d4d45a526d1ec6b9df01d50dc829df05c875d891e21aa45 Dec 12 16:33:12 crc kubenswrapper[5116]: I1212 16:33:12.037523 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"75de6d78-4896-42c9-a386-4acfa056603e","Type":"ContainerStarted","Data":"8637ca183127c7617d4d45a526d1ec6b9df01d50dc829df05c875d891e21aa45"} Dec 12 16:33:12 crc kubenswrapper[5116]: I1212 16:33:12.039881 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" event={"ID":"3ff3272f-ba82-4dd0-8b72-108a3e9e192b","Type":"ContainerStarted","Data":"7cad4e119d18107e6027de4cbeec7cfc58debaa6393e33955a09e6fcee06a558"} Dec 12 16:33:23 crc kubenswrapper[5116]: I1212 16:33:23.556092 5116 ???:1] "http: TLS handshake error from 
192.168.126.11:52958: no serving certificate available for the kubelet" Dec 12 16:33:27 crc kubenswrapper[5116]: I1212 16:33:27.229351 5116 generic.go:358] "Generic (PLEG): container finished" podID="75de6d78-4896-42c9-a386-4acfa056603e" containerID="9035c2d72c5f2ba16638c5e8c572ad6bdf8710e2eaca6191e3b57e91123c4ef0" exitCode=0 Dec 12 16:33:27 crc kubenswrapper[5116]: I1212 16:33:27.230193 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"75de6d78-4896-42c9-a386-4acfa056603e","Type":"ContainerDied","Data":"9035c2d72c5f2ba16638c5e8c572ad6bdf8710e2eaca6191e3b57e91123c4ef0"} Dec 12 16:33:27 crc kubenswrapper[5116]: I1212 16:33:27.235307 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" event={"ID":"3ff3272f-ba82-4dd0-8b72-108a3e9e192b","Type":"ContainerStarted","Data":"a0368a0b96c15d4d057ff7b407f20588c840df0d4195d453bc2ee06efca46824"} Dec 12 16:33:28 crc kubenswrapper[5116]: I1212 16:33:28.507253 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 16:33:28 crc kubenswrapper[5116]: I1212 16:33:28.617528 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf4x6\" (UniqueName: \"kubernetes.io/projected/75de6d78-4896-42c9-a386-4acfa056603e-kube-api-access-sf4x6\") pod \"75de6d78-4896-42c9-a386-4acfa056603e\" (UID: \"75de6d78-4896-42c9-a386-4acfa056603e\") " Dec 12 16:33:28 crc kubenswrapper[5116]: I1212 16:33:28.624985 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75de6d78-4896-42c9-a386-4acfa056603e-kube-api-access-sf4x6" (OuterVolumeSpecName: "kube-api-access-sf4x6") pod "75de6d78-4896-42c9-a386-4acfa056603e" (UID: "75de6d78-4896-42c9-a386-4acfa056603e"). InnerVolumeSpecName "kube-api-access-sf4x6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:33:28 crc kubenswrapper[5116]: I1212 16:33:28.679405 5116 ???:1] "http: TLS handshake error from 192.168.126.11:34830: no serving certificate available for the kubelet" Dec 12 16:33:28 crc kubenswrapper[5116]: I1212 16:33:28.719455 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sf4x6\" (UniqueName: \"kubernetes.io/projected/75de6d78-4896-42c9-a386-4acfa056603e-kube-api-access-sf4x6\") on node \"crc\" DevicePath \"\"" Dec 12 16:33:28 crc kubenswrapper[5116]: I1212 16:33:28.961219 5116 ???:1] "http: TLS handshake error from 192.168.126.11:34836: no serving certificate available for the kubelet" Dec 12 16:33:29 crc kubenswrapper[5116]: I1212 16:33:29.252485 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"75de6d78-4896-42c9-a386-4acfa056603e","Type":"ContainerDied","Data":"8637ca183127c7617d4d45a526d1ec6b9df01d50dc829df05c875d891e21aa45"} Dec 12 16:33:29 crc kubenswrapper[5116]: I1212 16:33:29.252536 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8637ca183127c7617d4d45a526d1ec6b9df01d50dc829df05c875d891e21aa45" Dec 12 16:33:29 crc kubenswrapper[5116]: I1212 16:33:29.252555 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 12 16:33:36 crc kubenswrapper[5116]: I1212 16:33:36.315725 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" event={"ID":"3ff3272f-ba82-4dd0-8b72-108a3e9e192b","Type":"ContainerStarted","Data":"a6c8c2a504d9e19e7aac5d2841ef1c94374cb48734e4bf3727d1ea6902477d0a"} Dec 12 16:33:36 crc kubenswrapper[5116]: I1212 16:33:36.339191 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" podStartSLOduration=1.8659104370000001 podStartE2EDuration="26.339165788s" podCreationTimestamp="2025-12-12 16:33:10 +0000 UTC" firstStartedPulling="2025-12-12 16:33:11.064946534 +0000 UTC m=+1085.529158290" lastFinishedPulling="2025-12-12 16:33:35.538201885 +0000 UTC m=+1110.002413641" observedRunningTime="2025-12-12 16:33:36.337707939 +0000 UTC m=+1110.801919695" watchObservedRunningTime="2025-12-12 16:33:36.339165788 +0000 UTC m=+1110.803377544" Dec 12 16:33:59 crc kubenswrapper[5116]: I1212 16:33:59.106668 5116 ???:1] "http: TLS handshake error from 192.168.126.11:55474: no serving certificate available for the kubelet" Dec 12 16:34:00 crc kubenswrapper[5116]: I1212 16:34:00.507668 5116 generic.go:358] "Generic (PLEG): container finished" podID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerID="a0368a0b96c15d4d057ff7b407f20588c840df0d4195d453bc2ee06efca46824" exitCode=0 Dec 12 16:34:00 crc kubenswrapper[5116]: I1212 16:34:00.507816 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" event={"ID":"3ff3272f-ba82-4dd0-8b72-108a3e9e192b","Type":"ContainerDied","Data":"a0368a0b96c15d4d057ff7b407f20588c840df0d4195d453bc2ee06efca46824"} Dec 12 16:34:00 crc kubenswrapper[5116]: I1212 16:34:00.508573 5116 scope.go:117] "RemoveContainer" containerID="a0368a0b96c15d4d057ff7b407f20588c840df0d4195d453bc2ee06efca46824" Dec 12 16:34:09 crc kubenswrapper[5116]: I1212 
16:34:09.587754 5116 generic.go:358] "Generic (PLEG): container finished" podID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerID="a6c8c2a504d9e19e7aac5d2841ef1c94374cb48734e4bf3727d1ea6902477d0a" exitCode=0
Dec 12 16:34:09 crc kubenswrapper[5116]: I1212 16:34:09.588384 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" event={"ID":"3ff3272f-ba82-4dd0-8b72-108a3e9e192b","Type":"ContainerDied","Data":"a6c8c2a504d9e19e7aac5d2841ef1c94374cb48734e4bf3727d1ea6902477d0a"}
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.063021 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zpkx6"
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.118276 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdkpq\" (UniqueName: \"kubernetes.io/projected/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-kube-api-access-bdkpq\") pod \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") "
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.118349 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-healthcheck-log\") pod \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") "
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.118585 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-entrypoint-script\") pod \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") "
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.118692 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-entrypoint-script\") pod \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") "
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.118736 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-config\") pod \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") "
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.118820 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-publisher\") pod \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") "
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.118841 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-sensubility-config\") pod \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\" (UID: \"3ff3272f-ba82-4dd0-8b72-108a3e9e192b\") "
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.127376 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-kube-api-access-bdkpq" (OuterVolumeSpecName: "kube-api-access-bdkpq") pod "3ff3272f-ba82-4dd0-8b72-108a3e9e192b" (UID: "3ff3272f-ba82-4dd0-8b72-108a3e9e192b"). InnerVolumeSpecName "kube-api-access-bdkpq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.140526 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "3ff3272f-ba82-4dd0-8b72-108a3e9e192b" (UID: "3ff3272f-ba82-4dd0-8b72-108a3e9e192b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.140927 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "3ff3272f-ba82-4dd0-8b72-108a3e9e192b" (UID: "3ff3272f-ba82-4dd0-8b72-108a3e9e192b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.141719 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "3ff3272f-ba82-4dd0-8b72-108a3e9e192b" (UID: "3ff3272f-ba82-4dd0-8b72-108a3e9e192b"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.142705 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "3ff3272f-ba82-4dd0-8b72-108a3e9e192b" (UID: "3ff3272f-ba82-4dd0-8b72-108a3e9e192b"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.142718 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "3ff3272f-ba82-4dd0-8b72-108a3e9e192b" (UID: "3ff3272f-ba82-4dd0-8b72-108a3e9e192b"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.143286 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "3ff3272f-ba82-4dd0-8b72-108a3e9e192b" (UID: "3ff3272f-ba82-4dd0-8b72-108a3e9e192b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.220013 5116 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.220057 5116 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.220069 5116 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-collectd-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.220079 5116 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.220089 5116 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-sensubility-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.220098 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bdkpq\" (UniqueName: \"kubernetes.io/projected/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-kube-api-access-bdkpq\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.220125 5116 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3ff3272f-ba82-4dd0-8b72-108a3e9e192b-healthcheck-log\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.607999 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zpkx6" event={"ID":"3ff3272f-ba82-4dd0-8b72-108a3e9e192b","Type":"ContainerDied","Data":"7cad4e119d18107e6027de4cbeec7cfc58debaa6393e33955a09e6fcee06a558"}
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.608465 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cad4e119d18107e6027de4cbeec7cfc58debaa6393e33955a09e6fcee06a558"
Dec 12 16:34:11 crc kubenswrapper[5116]: I1212 16:34:11.608071 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zpkx6"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.058974 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-xsnh2"]
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060324 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="75de6d78-4896-42c9-a386-4acfa056603e" containerName="curl"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060344 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="75de6d78-4896-42c9-a386-4acfa056603e" containerName="curl"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060367 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerName="smoketest-ceilometer"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060373 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerName="smoketest-ceilometer"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060383 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerName="smoketest-collectd"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060389 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerName="smoketest-collectd"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060527 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerName="smoketest-collectd"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060539 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="75de6d78-4896-42c9-a386-4acfa056603e" containerName="curl"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.060552 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ff3272f-ba82-4dd0-8b72-108a3e9e192b" containerName="smoketest-ceilometer"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.212808 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-xsnh2"]
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.213041 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.343189 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5kxz\" (UniqueName: \"kubernetes.io/projected/02ad3f5e-2bb8-4000-8cb6-ce5713727098-kube-api-access-t5kxz\") pod \"infrawatch-operators-xsnh2\" (UID: \"02ad3f5e-2bb8-4000-8cb6-ce5713727098\") " pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.444863 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t5kxz\" (UniqueName: \"kubernetes.io/projected/02ad3f5e-2bb8-4000-8cb6-ce5713727098-kube-api-access-t5kxz\") pod \"infrawatch-operators-xsnh2\" (UID: \"02ad3f5e-2bb8-4000-8cb6-ce5713727098\") " pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.466540 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5kxz\" (UniqueName: \"kubernetes.io/projected/02ad3f5e-2bb8-4000-8cb6-ce5713727098-kube-api-access-t5kxz\") pod \"infrawatch-operators-xsnh2\" (UID: \"02ad3f5e-2bb8-4000-8cb6-ce5713727098\") " pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:17 crc kubenswrapper[5116]: I1212 16:34:17.539455 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:18 crc kubenswrapper[5116]: I1212 16:34:18.090044 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-xsnh2"]
Dec 12 16:34:18 crc kubenswrapper[5116]: I1212 16:34:18.675768 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xsnh2" event={"ID":"02ad3f5e-2bb8-4000-8cb6-ce5713727098","Type":"ContainerStarted","Data":"f0f7a48d418ebfef263c9c820fa4786a39cdbe5d30d2d99935b4d1ebb959f316"}
Dec 12 16:34:19 crc kubenswrapper[5116]: I1212 16:34:19.686433 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xsnh2" event={"ID":"02ad3f5e-2bb8-4000-8cb6-ce5713727098","Type":"ContainerStarted","Data":"59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd"}
Dec 12 16:34:19 crc kubenswrapper[5116]: I1212 16:34:19.711754 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-xsnh2" podStartSLOduration=2.127977504 podStartE2EDuration="2.711728964s" podCreationTimestamp="2025-12-12 16:34:17 +0000 UTC" firstStartedPulling="2025-12-12 16:34:18.096769357 +0000 UTC m=+1152.560981113" lastFinishedPulling="2025-12-12 16:34:18.680520817 +0000 UTC m=+1153.144732573" observedRunningTime="2025-12-12 16:34:19.707094549 +0000 UTC m=+1154.171306305" watchObservedRunningTime="2025-12-12 16:34:19.711728964 +0000 UTC m=+1154.175940720"
Dec 12 16:34:27 crc kubenswrapper[5116]: I1212 16:34:27.540338 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:27 crc kubenswrapper[5116]: I1212 16:34:27.540917 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:27 crc kubenswrapper[5116]: I1212 16:34:27.567679 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:27 crc kubenswrapper[5116]: I1212 16:34:27.795503 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:29 crc kubenswrapper[5116]: I1212 16:34:29.292970 5116 ???:1] "http: TLS handshake error from 192.168.126.11:49108: no serving certificate available for the kubelet"
Dec 12 16:34:29 crc kubenswrapper[5116]: I1212 16:34:29.846859 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-xsnh2"]
Dec 12 16:34:29 crc kubenswrapper[5116]: I1212 16:34:29.847447 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-xsnh2" podUID="02ad3f5e-2bb8-4000-8cb6-ce5713727098" containerName="registry-server" containerID="cri-o://59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd" gracePeriod=2
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.760158 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.794247 5116 generic.go:358] "Generic (PLEG): container finished" podID="02ad3f5e-2bb8-4000-8cb6-ce5713727098" containerID="59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd" exitCode=0
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.794314 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xsnh2" event={"ID":"02ad3f5e-2bb8-4000-8cb6-ce5713727098","Type":"ContainerDied","Data":"59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd"}
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.794376 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xsnh2" event={"ID":"02ad3f5e-2bb8-4000-8cb6-ce5713727098","Type":"ContainerDied","Data":"f0f7a48d418ebfef263c9c820fa4786a39cdbe5d30d2d99935b4d1ebb959f316"}
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.794404 5116 scope.go:117] "RemoveContainer" containerID="59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd"
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.794833 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xsnh2"
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.814895 5116 scope.go:117] "RemoveContainer" containerID="59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd"
Dec 12 16:34:30 crc kubenswrapper[5116]: E1212 16:34:30.815550 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd\": container with ID starting with 59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd not found: ID does not exist" containerID="59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd"
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.815589 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd"} err="failed to get container status \"59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd\": rpc error: code = NotFound desc = could not find container \"59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd\": container with ID starting with 59ea48357fd6a562eca4539b4c7c321f130da64dd726253dc5bb825e72484bfd not found: ID does not exist"
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.883323 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5kxz\" (UniqueName: \"kubernetes.io/projected/02ad3f5e-2bb8-4000-8cb6-ce5713727098-kube-api-access-t5kxz\") pod \"02ad3f5e-2bb8-4000-8cb6-ce5713727098\" (UID: \"02ad3f5e-2bb8-4000-8cb6-ce5713727098\") "
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.891510 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ad3f5e-2bb8-4000-8cb6-ce5713727098-kube-api-access-t5kxz" (OuterVolumeSpecName: "kube-api-access-t5kxz") pod "02ad3f5e-2bb8-4000-8cb6-ce5713727098" (UID: "02ad3f5e-2bb8-4000-8cb6-ce5713727098"). InnerVolumeSpecName "kube-api-access-t5kxz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:34:30 crc kubenswrapper[5116]: I1212 16:34:30.985291 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5kxz\" (UniqueName: \"kubernetes.io/projected/02ad3f5e-2bb8-4000-8cb6-ce5713727098-kube-api-access-t5kxz\") on node \"crc\" DevicePath \"\""
Dec 12 16:34:31 crc kubenswrapper[5116]: I1212 16:34:31.126551 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-xsnh2"]
Dec 12 16:34:31 crc kubenswrapper[5116]: I1212 16:34:31.132413 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-xsnh2"]
Dec 12 16:34:32 crc kubenswrapper[5116]: I1212 16:34:32.054346 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ad3f5e-2bb8-4000-8cb6-ce5713727098" path="/var/lib/kubelet/pods/02ad3f5e-2bb8-4000-8cb6-ce5713727098/volumes"
Dec 12 16:34:45 crc kubenswrapper[5116]: I1212 16:34:45.507843 5116 ???:1] "http: TLS handshake error from 192.168.126.11:51792: no serving certificate available for the kubelet"
Dec 12 16:34:59 crc kubenswrapper[5116]: I1212 16:34:59.445893 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59396: no serving certificate available for the kubelet"
Dec 12 16:35:18 crc kubenswrapper[5116]: I1212 16:35:18.514626 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log"
Dec 12 16:35:18 crc kubenswrapper[5116]: I1212 16:35:18.570029 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bphkq_0e71d710-0829-4655-b88f-9318b9776228/kube-multus/0.log"
Dec 12 16:35:18 crc kubenswrapper[5116]: I1212 16:35:18.590016 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:35:19 crc kubenswrapper[5116]: I1212 16:35:19.416194 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:35:19 crc kubenswrapper[5116]: I1212 16:35:19.416813 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:35:29 crc kubenswrapper[5116]: I1212 16:35:29.629535 5116 ???:1] "http: TLS handshake error from 192.168.126.11:48568: no serving certificate available for the kubelet"
Dec 12 16:35:37 crc kubenswrapper[5116]: I1212 16:35:37.473195 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log"
Dec 12 16:35:37 crc kubenswrapper[5116]: I1212 16:35:37.507738 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bphkq_0e71d710-0829-4655-b88f-9318b9776228/kube-multus/0.log"
Dec 12 16:35:37 crc kubenswrapper[5116]: I1212 16:35:37.525576 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:35:49 crc kubenswrapper[5116]: I1212 16:35:49.416512 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:35:49 crc kubenswrapper[5116]: I1212 16:35:49.417183 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:36:00 crc kubenswrapper[5116]: I1212 16:36:00.926179 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46552: no serving certificate available for the kubelet"
Dec 12 16:36:01 crc kubenswrapper[5116]: I1212 16:36:01.226412 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46564: no serving certificate available for the kubelet"
Dec 12 16:36:01 crc kubenswrapper[5116]: I1212 16:36:01.580799 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46568: no serving certificate available for the kubelet"
Dec 12 16:36:01 crc kubenswrapper[5116]: I1212 16:36:01.871271 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46574: no serving certificate available for the kubelet"
Dec 12 16:36:02 crc kubenswrapper[5116]: I1212 16:36:02.136437 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46576: no serving certificate available for the kubelet"
Dec 12 16:36:02 crc kubenswrapper[5116]: I1212 16:36:02.479661 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46582: no serving certificate available for the kubelet"
Dec 12 16:36:02 crc kubenswrapper[5116]: I1212 16:36:02.810091 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46596: no serving certificate available for the kubelet"
Dec 12 16:36:03 crc kubenswrapper[5116]: I1212 16:36:03.089310 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46604: no serving certificate available for the kubelet"
Dec 12 16:36:03 crc kubenswrapper[5116]: I1212 16:36:03.385655 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46618: no serving certificate available for the kubelet"
Dec 12 16:36:03 crc kubenswrapper[5116]: I1212 16:36:03.643946 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46634: no serving certificate available for the kubelet"
Dec 12 16:36:03 crc kubenswrapper[5116]: I1212 16:36:03.953249 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46646: no serving certificate available for the kubelet"
Dec 12 16:36:04 crc kubenswrapper[5116]: I1212 16:36:04.321843 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46654: no serving certificate available for the kubelet"
Dec 12 16:36:04 crc kubenswrapper[5116]: I1212 16:36:04.570055 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46668: no serving certificate available for the kubelet"
Dec 12 16:36:04 crc kubenswrapper[5116]: I1212 16:36:04.895507 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46670: no serving certificate available for the kubelet"
Dec 12 16:36:05 crc kubenswrapper[5116]: I1212 16:36:05.213747 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46678: no serving certificate available for the kubelet"
Dec 12 16:36:05 crc kubenswrapper[5116]: I1212 16:36:05.508878 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46694: no serving certificate available for the kubelet"
Dec 12 16:36:05 crc kubenswrapper[5116]: I1212 16:36:05.802038 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46708: no serving certificate available for the kubelet"
Dec 12 16:36:06 crc kubenswrapper[5116]: I1212 16:36:06.106804 5116 ???:1] "http: TLS handshake error from 192.168.126.11:46716: no serving certificate available for the kubelet"
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.287204 5116 ???:1] "http: TLS handshake error from 192.168.126.11:36924: no serving certificate available for the kubelet"
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.416037 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.416232 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.416315 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bb58t"
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.417476 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c8bd92412771ff512e33cbee9cb6403cfdd2288cceb73946bb3fd16bc6d5c27"} pod="openshift-machine-config-operator/machine-config-daemon-bb58t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.417594 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" containerID="cri-o://7c8bd92412771ff512e33cbee9cb6403cfdd2288cceb73946bb3fd16bc6d5c27" gracePeriod=600
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.547693 5116 ???:1] "http: TLS handshake error from 192.168.126.11:36940: no serving certificate available for the kubelet"
Dec 12 16:36:19 crc kubenswrapper[5116]: I1212 16:36:19.808376 5116 ???:1] "http: TLS handshake error from 192.168.126.11:36942: no serving certificate available for the kubelet"
Dec 12 16:36:21 crc kubenswrapper[5116]: I1212 16:36:21.154147 5116 ???:1] "http: TLS handshake error from 192.168.126.11:36956: no serving certificate available for the kubelet"
Dec 12 16:36:28 crc kubenswrapper[5116]: I1212 16:36:28.066866 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-bb58t_8fedd19a-ed2a-4e65-a3ad-e104203261fe/machine-config-daemon/5.log"
Dec 12 16:36:28 crc kubenswrapper[5116]: I1212 16:36:28.069810 5116 generic.go:358] "Generic (PLEG): container finished" podID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerID="7c8bd92412771ff512e33cbee9cb6403cfdd2288cceb73946bb3fd16bc6d5c27" exitCode=-1
Dec 12 16:36:28 crc kubenswrapper[5116]: I1212 16:36:28.069902 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerDied","Data":"7c8bd92412771ff512e33cbee9cb6403cfdd2288cceb73946bb3fd16bc6d5c27"}
Dec 12 16:36:28 crc kubenswrapper[5116]: I1212 16:36:28.070012 5116 scope.go:117] "RemoveContainer" containerID="6984eb907933f60d328ca599e81410bd76181c70d8e7532f77a0eff2370beae5"
Dec 12 16:36:28 crc kubenswrapper[5116]: I1212 16:36:28.176151 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 16:36:29 crc kubenswrapper[5116]: I1212 16:36:29.081304 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"e4325b6dc3d5013355d658f7ab7f24472fe6dde112e4207a974f70099506c4d6"}
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.493556 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zssfd/must-gather-fjnpl"]
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.496336 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02ad3f5e-2bb8-4000-8cb6-ce5713727098" containerName="registry-server"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.496446 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ad3f5e-2bb8-4000-8cb6-ce5713727098" containerName="registry-server"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.496654 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="02ad3f5e-2bb8-4000-8cb6-ce5713727098" containerName="registry-server"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.511307 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.512477 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zssfd/must-gather-fjnpl"]
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.516240 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-zssfd\"/\"openshift-service-ca.crt\""
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.516851 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-zssfd\"/\"kube-root-ca.crt\""
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.517047 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-zssfd\"/\"default-dockercfg-p8nzb\""
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.550381 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkxh\" (UniqueName: \"kubernetes.io/projected/17ab58bd-df80-40ba-9ca2-2ca394fc5767-kube-api-access-qfkxh\") pod \"must-gather-fjnpl\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.550459 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17ab58bd-df80-40ba-9ca2-2ca394fc5767-must-gather-output\") pod \"must-gather-fjnpl\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.652035 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkxh\" (UniqueName: \"kubernetes.io/projected/17ab58bd-df80-40ba-9ca2-2ca394fc5767-kube-api-access-qfkxh\") pod \"must-gather-fjnpl\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.652131 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17ab58bd-df80-40ba-9ca2-2ca394fc5767-must-gather-output\") pod \"must-gather-fjnpl\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.652623 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17ab58bd-df80-40ba-9ca2-2ca394fc5767-must-gather-output\") pod \"must-gather-fjnpl\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.675404 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfkxh\" (UniqueName: \"kubernetes.io/projected/17ab58bd-df80-40ba-9ca2-2ca394fc5767-kube-api-access-qfkxh\") pod \"must-gather-fjnpl\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:00 crc kubenswrapper[5116]: I1212 16:37:00.858458 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zssfd/must-gather-fjnpl"
Dec 12 16:37:01 crc kubenswrapper[5116]: I1212 16:37:01.186782 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zssfd/must-gather-fjnpl"]
Dec 12 16:37:01 crc kubenswrapper[5116]: I1212 16:37:01.357257 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zssfd/must-gather-fjnpl" event={"ID":"17ab58bd-df80-40ba-9ca2-2ca394fc5767","Type":"ContainerStarted","Data":"683983ac126408a8f2fe030eef715dd03ac2bbdc7c6ee100b2bf320b3cb55e1c"}
Dec 12 16:37:07 crc kubenswrapper[5116]: I1212 16:37:07.419928 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zssfd/must-gather-fjnpl" event={"ID":"17ab58bd-df80-40ba-9ca2-2ca394fc5767","Type":"ContainerStarted","Data":"756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55"}
Dec 12 16:37:08 crc kubenswrapper[5116]: I1212 16:37:08.431981 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zssfd/must-gather-fjnpl" event={"ID":"17ab58bd-df80-40ba-9ca2-2ca394fc5767","Type":"ContainerStarted","Data":"39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e"}
Dec 12 16:37:08 crc kubenswrapper[5116]: I1212 16:37:08.463438 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zssfd/must-gather-fjnpl" podStartSLOduration=2.595423424 podStartE2EDuration="8.463411614s" podCreationTimestamp="2025-12-12 16:37:00 +0000 UTC" firstStartedPulling="2025-12-12 16:37:01.208742237 +0000 UTC m=+1315.672953993" lastFinishedPulling="2025-12-12 16:37:07.076730407 +0000 UTC m=+1321.540942183" observedRunningTime="2025-12-12 16:37:08.459302033 +0000 UTC m=+1322.923513789" watchObservedRunningTime="2025-12-12 16:37:08.463411614 +0000 UTC m=+1322.927623380"
Dec 12 16:37:10 crc kubenswrapper[5116]: I1212 16:37:10.484139 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47914: no serving certificate available for the kubelet"
Dec 12 16:37:29 crc kubenswrapper[5116]: I1212 16:37:29.387532 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52800: no serving certificate available for the kubelet"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.158525 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wfwfd"]
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.183817 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.184924 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfwfd"]
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.236913 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-catalog-content\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.237013 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-utilities\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.237227 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzml\" (UniqueName: \"kubernetes.io/projected/de887394-128a-42f7-9f0e-b1016afb01e7-kube-api-access-chzml\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.339318 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-catalog-content\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.339380 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-utilities\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.339475 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-chzml\" (UniqueName: \"kubernetes.io/projected/de887394-128a-42f7-9f0e-b1016afb01e7-kube-api-access-chzml\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.340054 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-catalog-content\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd"
Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.340553 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-utilities\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") 
" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.365873 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-chzml\" (UniqueName: \"kubernetes.io/projected/de887394-128a-42f7-9f0e-b1016afb01e7-kube-api-access-chzml\") pod \"certified-operators-wfwfd\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.508023 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:42 crc kubenswrapper[5116]: I1212 16:37:42.830775 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfwfd"] Dec 12 16:37:43 crc kubenswrapper[5116]: I1212 16:37:43.740142 5116 generic.go:358] "Generic (PLEG): container finished" podID="de887394-128a-42f7-9f0e-b1016afb01e7" containerID="d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9" exitCode=0 Dec 12 16:37:43 crc kubenswrapper[5116]: I1212 16:37:43.740208 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfwfd" event={"ID":"de887394-128a-42f7-9f0e-b1016afb01e7","Type":"ContainerDied","Data":"d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9"} Dec 12 16:37:43 crc kubenswrapper[5116]: I1212 16:37:43.740261 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfwfd" event={"ID":"de887394-128a-42f7-9f0e-b1016afb01e7","Type":"ContainerStarted","Data":"0a8741861c170108953ebbd19459dc3af3a1025eadedbb163de58c9fa0e04d95"} Dec 12 16:37:44 crc kubenswrapper[5116]: I1212 16:37:44.749977 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfwfd" 
event={"ID":"de887394-128a-42f7-9f0e-b1016afb01e7","Type":"ContainerStarted","Data":"ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca"} Dec 12 16:37:45 crc kubenswrapper[5116]: I1212 16:37:45.764799 5116 generic.go:358] "Generic (PLEG): container finished" podID="de887394-128a-42f7-9f0e-b1016afb01e7" containerID="ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca" exitCode=0 Dec 12 16:37:45 crc kubenswrapper[5116]: I1212 16:37:45.764927 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfwfd" event={"ID":"de887394-128a-42f7-9f0e-b1016afb01e7","Type":"ContainerDied","Data":"ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca"} Dec 12 16:37:46 crc kubenswrapper[5116]: I1212 16:37:46.782886 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfwfd" event={"ID":"de887394-128a-42f7-9f0e-b1016afb01e7","Type":"ContainerStarted","Data":"a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb"} Dec 12 16:37:46 crc kubenswrapper[5116]: I1212 16:37:46.808132 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wfwfd" podStartSLOduration=4.156262402 podStartE2EDuration="4.808084936s" podCreationTimestamp="2025-12-12 16:37:42 +0000 UTC" firstStartedPulling="2025-12-12 16:37:43.741586226 +0000 UTC m=+1358.205797982" lastFinishedPulling="2025-12-12 16:37:44.39340872 +0000 UTC m=+1358.857620516" observedRunningTime="2025-12-12 16:37:46.806493084 +0000 UTC m=+1361.270704900" watchObservedRunningTime="2025-12-12 16:37:46.808084936 +0000 UTC m=+1361.272296692" Dec 12 16:37:49 crc kubenswrapper[5116]: I1212 16:37:49.698047 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44680: no serving certificate available for the kubelet" Dec 12 16:37:49 crc kubenswrapper[5116]: I1212 16:37:49.871497 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44682: no 
serving certificate available for the kubelet" Dec 12 16:37:49 crc kubenswrapper[5116]: I1212 16:37:49.904410 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44684: no serving certificate available for the kubelet" Dec 12 16:37:52 crc kubenswrapper[5116]: I1212 16:37:52.508188 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:52 crc kubenswrapper[5116]: I1212 16:37:52.508254 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:52 crc kubenswrapper[5116]: I1212 16:37:52.550558 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:52 crc kubenswrapper[5116]: I1212 16:37:52.871069 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:52 crc kubenswrapper[5116]: I1212 16:37:52.926582 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wfwfd"] Dec 12 16:37:54 crc kubenswrapper[5116]: I1212 16:37:54.844410 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wfwfd" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="registry-server" containerID="cri-o://a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb" gracePeriod=2 Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.740057 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.759432 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-utilities\") pod \"de887394-128a-42f7-9f0e-b1016afb01e7\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.759596 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chzml\" (UniqueName: \"kubernetes.io/projected/de887394-128a-42f7-9f0e-b1016afb01e7-kube-api-access-chzml\") pod \"de887394-128a-42f7-9f0e-b1016afb01e7\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.759650 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-catalog-content\") pod \"de887394-128a-42f7-9f0e-b1016afb01e7\" (UID: \"de887394-128a-42f7-9f0e-b1016afb01e7\") " Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.760698 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-utilities" (OuterVolumeSpecName: "utilities") pod "de887394-128a-42f7-9f0e-b1016afb01e7" (UID: "de887394-128a-42f7-9f0e-b1016afb01e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.777813 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de887394-128a-42f7-9f0e-b1016afb01e7-kube-api-access-chzml" (OuterVolumeSpecName: "kube-api-access-chzml") pod "de887394-128a-42f7-9f0e-b1016afb01e7" (UID: "de887394-128a-42f7-9f0e-b1016afb01e7"). InnerVolumeSpecName "kube-api-access-chzml". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.807516 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de887394-128a-42f7-9f0e-b1016afb01e7" (UID: "de887394-128a-42f7-9f0e-b1016afb01e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.853005 5116 generic.go:358] "Generic (PLEG): container finished" podID="de887394-128a-42f7-9f0e-b1016afb01e7" containerID="a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb" exitCode=0 Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.853055 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfwfd" event={"ID":"de887394-128a-42f7-9f0e-b1016afb01e7","Type":"ContainerDied","Data":"a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb"} Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.853136 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wfwfd" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.853152 5116 scope.go:117] "RemoveContainer" containerID="a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.853139 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfwfd" event={"ID":"de887394-128a-42f7-9f0e-b1016afb01e7","Type":"ContainerDied","Data":"0a8741861c170108953ebbd19459dc3af3a1025eadedbb163de58c9fa0e04d95"} Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.861241 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-chzml\" (UniqueName: \"kubernetes.io/projected/de887394-128a-42f7-9f0e-b1016afb01e7-kube-api-access-chzml\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.861275 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.861286 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de887394-128a-42f7-9f0e-b1016afb01e7-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.880459 5116 scope.go:117] "RemoveContainer" containerID="ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.890523 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wfwfd"] Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.895745 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wfwfd"] Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.918184 5116 scope.go:117] 
"RemoveContainer" containerID="d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.942326 5116 scope.go:117] "RemoveContainer" containerID="a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb" Dec 12 16:37:55 crc kubenswrapper[5116]: E1212 16:37:55.943165 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb\": container with ID starting with a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb not found: ID does not exist" containerID="a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.943226 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb"} err="failed to get container status \"a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb\": rpc error: code = NotFound desc = could not find container \"a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb\": container with ID starting with a291130292cc05e8dccf8ecab882353385269d1eca87505992cb3a610a91fceb not found: ID does not exist" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.943256 5116 scope.go:117] "RemoveContainer" containerID="ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca" Dec 12 16:37:55 crc kubenswrapper[5116]: E1212 16:37:55.943785 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca\": container with ID starting with ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca not found: ID does not exist" containerID="ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca" Dec 12 16:37:55 crc 
kubenswrapper[5116]: I1212 16:37:55.943847 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca"} err="failed to get container status \"ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca\": rpc error: code = NotFound desc = could not find container \"ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca\": container with ID starting with ba475f706131b4776504f1ddfb423eeb537385021d415f3d1ede1704005554ca not found: ID does not exist" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.943888 5116 scope.go:117] "RemoveContainer" containerID="d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9" Dec 12 16:37:55 crc kubenswrapper[5116]: E1212 16:37:55.944338 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9\": container with ID starting with d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9 not found: ID does not exist" containerID="d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9" Dec 12 16:37:55 crc kubenswrapper[5116]: I1212 16:37:55.944386 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9"} err="failed to get container status \"d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9\": rpc error: code = NotFound desc = could not find container \"d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9\": container with ID starting with d70710c7119b31e3b8f7be79195f239756dbbc8c6192933ddafb5f9a342311e9 not found: ID does not exist" Dec 12 16:37:56 crc kubenswrapper[5116]: I1212 16:37:56.055540 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" 
path="/var/lib/kubelet/pods/de887394-128a-42f7-9f0e-b1016afb01e7/volumes" Dec 12 16:38:01 crc kubenswrapper[5116]: I1212 16:38:01.485536 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59676: no serving certificate available for the kubelet" Dec 12 16:38:01 crc kubenswrapper[5116]: I1212 16:38:01.631000 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59688: no serving certificate available for the kubelet" Dec 12 16:38:01 crc kubenswrapper[5116]: I1212 16:38:01.688333 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59700: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.074553 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40744: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.239670 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40760: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.246019 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40762: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.260851 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40768: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.480271 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40778: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.499484 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40790: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.514431 5116 ???:1] "http: TLS handshake error from 192.168.126.11:40796: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.647837 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32940: no 
serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.868123 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32946: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.878074 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32948: no serving certificate available for the kubelet" Dec 12 16:38:17 crc kubenswrapper[5116]: I1212 16:38:17.878453 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32950: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.083718 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32968: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.084075 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32958: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.102275 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32972: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.278577 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32974: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.436022 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32978: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.455444 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32986: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.455780 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32992: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.632060 5116 ???:1] "http: TLS handshake error from 192.168.126.11:32998: no serving certificate available 
for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.657496 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33012: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.658678 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33028: no serving certificate available for the kubelet" Dec 12 16:38:18 crc kubenswrapper[5116]: I1212 16:38:18.842227 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33044: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.009198 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33054: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.045796 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33058: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.082987 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33068: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.233877 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33074: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.244118 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33076: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.268002 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33082: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.424990 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33084: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.607597 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33090: no serving certificate available for the kubelet" Dec 12 
16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.662915 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33092: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.664023 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33096: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.825285 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33108: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.856439 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33112: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.881544 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33118: no serving certificate available for the kubelet" Dec 12 16:38:19 crc kubenswrapper[5116]: I1212 16:38:19.975543 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33134: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.135627 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33148: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.154653 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33158: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.180480 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33174: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.325494 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33180: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.366744 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33196: no serving certificate available for the kubelet" Dec 12 16:38:20 crc 
kubenswrapper[5116]: I1212 16:38:20.380154 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33212: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.412660 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33222: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.543927 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33224: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.735488 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33226: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.742077 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33232: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.746462 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33242: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.916375 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33244: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.918231 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33256: no serving certificate available for the kubelet" Dec 12 16:38:20 crc kubenswrapper[5116]: I1212 16:38:20.920382 5116 ???:1] "http: TLS handshake error from 192.168.126.11:33272: no serving certificate available for the kubelet" Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.989065 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mh58z"] Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.990084 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="extract-utilities" Dec 12 16:38:24 
crc kubenswrapper[5116]: I1212 16:38:24.990100 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="extract-utilities" Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.990137 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="registry-server" Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.990143 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="registry-server" Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.990159 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="extract-content" Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.990165 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="extract-content" Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.990293 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="de887394-128a-42f7-9f0e-b1016afb01e7" containerName="registry-server" Dec 12 16:38:24 crc kubenswrapper[5116]: I1212 16:38:24.994427 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.009554 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mh58z"] Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.072482 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q8p5\" (UniqueName: \"kubernetes.io/projected/346bfc49-6cd8-41c5-b929-c89320472500-kube-api-access-6q8p5\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.072598 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-utilities\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.072644 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-catalog-content\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.174188 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q8p5\" (UniqueName: \"kubernetes.io/projected/346bfc49-6cd8-41c5-b929-c89320472500-kube-api-access-6q8p5\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.174506 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-utilities\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.174538 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-catalog-content\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.175135 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-utilities\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.175181 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-catalog-content\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.199362 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q8p5\" (UniqueName: \"kubernetes.io/projected/346bfc49-6cd8-41c5-b929-c89320472500-kube-api-access-6q8p5\") pod \"community-operators-mh58z\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") " pod="openshift-marketplace/community-operators-mh58z" Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.317827 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mh58z"
Dec 12 16:38:25 crc kubenswrapper[5116]: I1212 16:38:25.589231 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mh58z"]
Dec 12 16:38:26 crc kubenswrapper[5116]: I1212 16:38:26.083668 5116 generic.go:358] "Generic (PLEG): container finished" podID="346bfc49-6cd8-41c5-b929-c89320472500" containerID="30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67" exitCode=0
Dec 12 16:38:26 crc kubenswrapper[5116]: I1212 16:38:26.083789 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh58z" event={"ID":"346bfc49-6cd8-41c5-b929-c89320472500","Type":"ContainerDied","Data":"30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67"}
Dec 12 16:38:26 crc kubenswrapper[5116]: I1212 16:38:26.083840 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh58z" event={"ID":"346bfc49-6cd8-41c5-b929-c89320472500","Type":"ContainerStarted","Data":"2244735052ffa239c635b8d90927721d8cebc994176bbdecf6c323d42ae31db9"}
Dec 12 16:38:28 crc kubenswrapper[5116]: I1212 16:38:28.100522 5116 generic.go:358] "Generic (PLEG): container finished" podID="346bfc49-6cd8-41c5-b929-c89320472500" containerID="650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056" exitCode=0
Dec 12 16:38:28 crc kubenswrapper[5116]: I1212 16:38:28.100590 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh58z" event={"ID":"346bfc49-6cd8-41c5-b929-c89320472500","Type":"ContainerDied","Data":"650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056"}
Dec 12 16:38:29 crc kubenswrapper[5116]: I1212 16:38:29.111040 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh58z"
event={"ID":"346bfc49-6cd8-41c5-b929-c89320472500","Type":"ContainerStarted","Data":"468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff"}
Dec 12 16:38:29 crc kubenswrapper[5116]: I1212 16:38:29.141571 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mh58z" podStartSLOduration=3.709474382 podStartE2EDuration="5.141550522s" podCreationTimestamp="2025-12-12 16:38:24 +0000 UTC" firstStartedPulling="2025-12-12 16:38:26.084841145 +0000 UTC m=+1400.549052901" lastFinishedPulling="2025-12-12 16:38:27.516917275 +0000 UTC m=+1401.981129041" observedRunningTime="2025-12-12 16:38:29.134182184 +0000 UTC m=+1403.598393940" watchObservedRunningTime="2025-12-12 16:38:29.141550522 +0000 UTC m=+1403.605762278"
Dec 12 16:38:33 crc kubenswrapper[5116]: I1212 16:38:33.508798 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42674: no serving certificate available for the kubelet"
Dec 12 16:38:33 crc kubenswrapper[5116]: I1212 16:38:33.670008 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42678: no serving certificate available for the kubelet"
Dec 12 16:38:33 crc kubenswrapper[5116]: I1212 16:38:33.736867 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42688: no serving certificate available for the kubelet"
Dec 12 16:38:33 crc kubenswrapper[5116]: I1212 16:38:33.863664 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42700: no serving certificate available for the kubelet"
Dec 12 16:38:33 crc kubenswrapper[5116]: I1212 16:38:33.938396 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42704: no serving certificate available for the kubelet"
Dec 12 16:38:34 crc kubenswrapper[5116]: I1212 16:38:34.475239 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qfbhk"]
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.846517 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api"
pods=["openshift-marketplace/redhat-operators-qfbhk"]
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.847188 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mh58z"
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.846614 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.847303 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mh58z"
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.847497 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mh58z"
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.910595 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mh58z"
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.952068 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-catalog-content\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.952468 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8x5\" (UniqueName: \"kubernetes.io/projected/b6d7a0c4-070b-4254-872e-a941c268533c-kube-api-access-fc8x5\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:35 crc kubenswrapper[5116]: I1212 16:38:35.952807 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-utilities\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.054780 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-utilities\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.054898 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-catalog-content\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.054971 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8x5\" (UniqueName: \"kubernetes.io/projected/b6d7a0c4-070b-4254-872e-a941c268533c-kube-api-access-fc8x5\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.055378 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-utilities\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.055500 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-catalog-content\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.078766 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8x5\" (UniqueName: \"kubernetes.io/projected/b6d7a0c4-070b-4254-872e-a941c268533c-kube-api-access-fc8x5\") pod \"redhat-operators-qfbhk\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") " pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.173324 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.385287 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qfbhk"]
Dec 12 16:38:36 crc kubenswrapper[5116]: I1212 16:38:36.664055 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mh58z"]
Dec 12 16:38:37 crc kubenswrapper[5116]: I1212 16:38:37.189301 5116 generic.go:358] "Generic (PLEG): container finished" podID="b6d7a0c4-070b-4254-872e-a941c268533c" containerID="cd964f3f4361fd48f41ebaf436129575b90544ed5256b1dbd199f847d781398e" exitCode=0
Dec 12 16:38:37 crc kubenswrapper[5116]: I1212 16:38:37.189439 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfbhk" event={"ID":"b6d7a0c4-070b-4254-872e-a941c268533c","Type":"ContainerDied","Data":"cd964f3f4361fd48f41ebaf436129575b90544ed5256b1dbd199f847d781398e"}
Dec 12 16:38:37 crc kubenswrapper[5116]: I1212 16:38:37.189517 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfbhk"
event={"ID":"b6d7a0c4-070b-4254-872e-a941c268533c","Type":"ContainerStarted","Data":"59ea8d82065c7c50e1ef212cb65b345b99720f9532223cdcc444236a3856c00e"}
Dec 12 16:38:38 crc kubenswrapper[5116]: I1212 16:38:38.200141 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfbhk" event={"ID":"b6d7a0c4-070b-4254-872e-a941c268533c","Type":"ContainerStarted","Data":"f332d33b14bbf92e086a6b39844e2342ebc95a29c4dc4c7bb883d13c3264650e"}
Dec 12 16:38:38 crc kubenswrapper[5116]: I1212 16:38:38.200949 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mh58z" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="registry-server" containerID="cri-o://468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff" gracePeriod=2
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.116874 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mh58z"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.221385 5116 generic.go:358] "Generic (PLEG): container finished" podID="346bfc49-6cd8-41c5-b929-c89320472500" containerID="468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff" exitCode=0
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.221474 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh58z" event={"ID":"346bfc49-6cd8-41c5-b929-c89320472500","Type":"ContainerDied","Data":"468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff"}
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.221512 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh58z" event={"ID":"346bfc49-6cd8-41c5-b929-c89320472500","Type":"ContainerDied","Data":"2244735052ffa239c635b8d90927721d8cebc994176bbdecf6c323d42ae31db9"}
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212
16:38:39.221533 5116 scope.go:117] "RemoveContainer" containerID="468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.221728 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mh58z"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.222798 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q8p5\" (UniqueName: \"kubernetes.io/projected/346bfc49-6cd8-41c5-b929-c89320472500-kube-api-access-6q8p5\") pod \"346bfc49-6cd8-41c5-b929-c89320472500\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") "
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.222856 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-utilities\") pod \"346bfc49-6cd8-41c5-b929-c89320472500\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") "
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.222879 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-catalog-content\") pod \"346bfc49-6cd8-41c5-b929-c89320472500\" (UID: \"346bfc49-6cd8-41c5-b929-c89320472500\") "
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.224930 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-utilities" (OuterVolumeSpecName: "utilities") pod "346bfc49-6cd8-41c5-b929-c89320472500" (UID: "346bfc49-6cd8-41c5-b929-c89320472500"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.245762 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346bfc49-6cd8-41c5-b929-c89320472500-kube-api-access-6q8p5" (OuterVolumeSpecName: "kube-api-access-6q8p5") pod "346bfc49-6cd8-41c5-b929-c89320472500" (UID: "346bfc49-6cd8-41c5-b929-c89320472500"). InnerVolumeSpecName "kube-api-access-6q8p5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.259181 5116 generic.go:358] "Generic (PLEG): container finished" podID="b6d7a0c4-070b-4254-872e-a941c268533c" containerID="f332d33b14bbf92e086a6b39844e2342ebc95a29c4dc4c7bb883d13c3264650e" exitCode=0
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.259518 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfbhk" event={"ID":"b6d7a0c4-070b-4254-872e-a941c268533c","Type":"ContainerDied","Data":"f332d33b14bbf92e086a6b39844e2342ebc95a29c4dc4c7bb883d13c3264650e"}
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.322320 5116 scope.go:117] "RemoveContainer" containerID="650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.324701 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6q8p5\" (UniqueName: \"kubernetes.io/projected/346bfc49-6cd8-41c5-b929-c89320472500-kube-api-access-6q8p5\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.324756 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.336235 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "346bfc49-6cd8-41c5-b929-c89320472500" (UID: "346bfc49-6cd8-41c5-b929-c89320472500"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.351442 5116 scope.go:117] "RemoveContainer" containerID="30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.403508 5116 scope.go:117] "RemoveContainer" containerID="468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff"
Dec 12 16:38:39 crc kubenswrapper[5116]: E1212 16:38:39.405066 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff\": container with ID starting with 468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff not found: ID does not exist" containerID="468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.405133 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff"} err="failed to get container status \"468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff\": rpc error: code = NotFound desc = could not find container \"468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff\": container with ID starting with 468443b812f35a90ffb6e8a6c24c47d2c20f02a3c3eef9d7f8b270aa7f2660ff not found: ID does not exist"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.405160 5116 scope.go:117] "RemoveContainer" containerID="650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056"
Dec 12 16:38:39 crc kubenswrapper[5116]: E1212 16:38:39.406488 5116 log.go:32] "ContainerStatus
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056\": container with ID starting with 650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056 not found: ID does not exist" containerID="650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.406515 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056"} err="failed to get container status \"650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056\": rpc error: code = NotFound desc = could not find container \"650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056\": container with ID starting with 650363d2e18c8eaaffda2ef677603f4707ea7f210a422444b7883a3e35406056 not found: ID does not exist"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.406530 5116 scope.go:117] "RemoveContainer" containerID="30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67"
Dec 12 16:38:39 crc kubenswrapper[5116]: E1212 16:38:39.407024 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67\": container with ID starting with 30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67 not found: ID does not exist" containerID="30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.407059 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67"} err="failed to get container status \"30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67\": rpc error: code = NotFound desc = could not find
container \"30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67\": container with ID starting with 30ccf966a59120e7b065ceecbc7642a0b410a228723431b420c66b4d8c2aaf67 not found: ID does not exist"
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.425914 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346bfc49-6cd8-41c5-b929-c89320472500-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.560779 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mh58z"]
Dec 12 16:38:39 crc kubenswrapper[5116]: I1212 16:38:39.566880 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mh58z"]
Dec 12 16:38:40 crc kubenswrapper[5116]: I1212 16:38:40.057464 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346bfc49-6cd8-41c5-b929-c89320472500" path="/var/lib/kubelet/pods/346bfc49-6cd8-41c5-b929-c89320472500/volumes"
Dec 12 16:38:40 crc kubenswrapper[5116]: I1212 16:38:40.276064 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfbhk" event={"ID":"b6d7a0c4-070b-4254-872e-a941c268533c","Type":"ContainerStarted","Data":"e438a8b19fc9cf1541b6a9ddaa58e2a300bfbf0cd5d00648598f762a4ef0fcb1"}
Dec 12 16:38:40 crc kubenswrapper[5116]: I1212 16:38:40.306710 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qfbhk" podStartSLOduration=5.591826677 podStartE2EDuration="6.306675879s" podCreationTimestamp="2025-12-12 16:38:34 +0000 UTC" firstStartedPulling="2025-12-12 16:38:37.19045959 +0000 UTC m=+1411.654671336" lastFinishedPulling="2025-12-12 16:38:37.905308782 +0000 UTC m=+1412.369520538" observedRunningTime="2025-12-12 16:38:40.298791707 +0000 UTC m=+1414.763003463" watchObservedRunningTime="2025-12-12 16:38:40.306675879
+0000 UTC m=+1414.770887635"
Dec 12 16:38:46 crc kubenswrapper[5116]: I1212 16:38:46.174193 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:46 crc kubenswrapper[5116]: I1212 16:38:46.174525 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:46 crc kubenswrapper[5116]: I1212 16:38:46.214226 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:46 crc kubenswrapper[5116]: I1212 16:38:46.372989 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:46 crc kubenswrapper[5116]: I1212 16:38:46.448359 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfbhk"]
Dec 12 16:38:48 crc kubenswrapper[5116]: I1212 16:38:48.344569 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qfbhk" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="registry-server" containerID="cri-o://e438a8b19fc9cf1541b6a9ddaa58e2a300bfbf0cd5d00648598f762a4ef0fcb1" gracePeriod=2
Dec 12 16:38:49 crc kubenswrapper[5116]: I1212 16:38:49.416871 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:38:49 crc kubenswrapper[5116]: I1212 16:38:49.418129 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.510079 5116 generic.go:358] "Generic (PLEG): container finished" podID="b6d7a0c4-070b-4254-872e-a941c268533c" containerID="e438a8b19fc9cf1541b6a9ddaa58e2a300bfbf0cd5d00648598f762a4ef0fcb1" exitCode=0
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.510341 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfbhk" event={"ID":"b6d7a0c4-070b-4254-872e-a941c268533c","Type":"ContainerDied","Data":"e438a8b19fc9cf1541b6a9ddaa58e2a300bfbf0cd5d00648598f762a4ef0fcb1"}
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.666163 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.788306 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc8x5\" (UniqueName: \"kubernetes.io/projected/b6d7a0c4-070b-4254-872e-a941c268533c-kube-api-access-fc8x5\") pod \"b6d7a0c4-070b-4254-872e-a941c268533c\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") "
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.788599 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-utilities\") pod \"b6d7a0c4-070b-4254-872e-a941c268533c\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") "
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.788774 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-catalog-content\") pod \"b6d7a0c4-070b-4254-872e-a941c268533c\" (UID: \"b6d7a0c4-070b-4254-872e-a941c268533c\") "
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.790801 5116
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-utilities" (OuterVolumeSpecName: "utilities") pod "b6d7a0c4-070b-4254-872e-a941c268533c" (UID: "b6d7a0c4-070b-4254-872e-a941c268533c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.797792 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d7a0c4-070b-4254-872e-a941c268533c-kube-api-access-fc8x5" (OuterVolumeSpecName: "kube-api-access-fc8x5") pod "b6d7a0c4-070b-4254-872e-a941c268533c" (UID: "b6d7a0c4-070b-4254-872e-a941c268533c"). InnerVolumeSpecName "kube-api-access-fc8x5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.896289 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6d7a0c4-070b-4254-872e-a941c268533c" (UID: "b6d7a0c4-070b-4254-872e-a941c268533c"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.899311 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.899337 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d7a0c4-070b-4254-872e-a941c268533c-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:53 crc kubenswrapper[5116]: I1212 16:38:53.899351 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fc8x5\" (UniqueName: \"kubernetes.io/projected/b6d7a0c4-070b-4254-872e-a941c268533c-kube-api-access-fc8x5\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:54 crc kubenswrapper[5116]: I1212 16:38:54.525379 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfbhk" event={"ID":"b6d7a0c4-070b-4254-872e-a941c268533c","Type":"ContainerDied","Data":"59ea8d82065c7c50e1ef212cb65b345b99720f9532223cdcc444236a3856c00e"}
Dec 12 16:38:54 crc kubenswrapper[5116]: I1212 16:38:54.525423 5116 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-qfbhk"
Dec 12 16:38:54 crc kubenswrapper[5116]: I1212 16:38:54.525486 5116 scope.go:117] "RemoveContainer" containerID="e438a8b19fc9cf1541b6a9ddaa58e2a300bfbf0cd5d00648598f762a4ef0fcb1"
Dec 12 16:38:54 crc kubenswrapper[5116]: I1212 16:38:54.562809 5116 scope.go:117] "RemoveContainer" containerID="f332d33b14bbf92e086a6b39844e2342ebc95a29c4dc4c7bb883d13c3264650e"
Dec 12 16:38:54 crc kubenswrapper[5116]: I1212 16:38:54.567090 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfbhk"]
Dec 12 16:38:54 crc kubenswrapper[5116]: I1212 16:38:54.573887 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qfbhk"]
Dec 12 16:38:54 crc kubenswrapper[5116]: I1212 16:38:54.590488 5116 scope.go:117] "RemoveContainer" containerID="cd964f3f4361fd48f41ebaf436129575b90544ed5256b1dbd199f847d781398e"
Dec 12 16:38:56 crc kubenswrapper[5116]: I1212 16:38:56.062296 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" path="/var/lib/kubelet/pods/b6d7a0c4-070b-4254-872e-a941c268533c/volumes"
Dec 12 16:39:17 crc kubenswrapper[5116]: I1212 16:39:17.758505 5116 generic.go:358] "Generic (PLEG): container finished" podID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerID="756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55" exitCode=0
Dec 12 16:39:17 crc kubenswrapper[5116]: I1212 16:39:17.758621 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zssfd/must-gather-fjnpl" event={"ID":"17ab58bd-df80-40ba-9ca2-2ca394fc5767","Type":"ContainerDied","Data":"756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55"}
Dec 12 16:39:17 crc kubenswrapper[5116]: I1212 16:39:17.760489 5116 scope.go:117] "RemoveContainer" containerID="756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55"
Dec 12 16:39:19 crc kubenswrapper[5116]:
I1212 16:39:19.416287 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:39:19 crc kubenswrapper[5116]: I1212 16:39:19.416622 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.504232 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45260: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.672882 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45272: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.684315 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45286: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.706056 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45300: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.717411 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45312: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.731048 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45320: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.741682 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45328: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]:
I1212 16:39:20.757094 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45342: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.769975 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45344: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.916820 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45352: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.935643 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45358: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.957818 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45364: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.971842 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45374: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.983676 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45388: no serving certificate available for the kubelet"
Dec 12 16:39:20 crc kubenswrapper[5116]: I1212 16:39:20.996954 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45392: no serving certificate available for the kubelet"
Dec 12 16:39:21 crc kubenswrapper[5116]: I1212 16:39:21.009512 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45402: no serving certificate available for the kubelet"
Dec 12 16:39:21 crc kubenswrapper[5116]: I1212 16:39:21.018383 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45414: no serving certificate available for the kubelet"
Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.167099 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zssfd/must-gather-fjnpl"]
Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.168305 5116
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-zssfd/must-gather-fjnpl" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerName="copy" containerID="cri-o://39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e" gracePeriod=2 Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.170471 5116 status_manager.go:895] "Failed to get status for pod" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" pod="openshift-must-gather-zssfd/must-gather-fjnpl" err="pods \"must-gather-fjnpl\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-zssfd\": no relationship found between node 'crc' and this object" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.175290 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zssfd/must-gather-fjnpl"] Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.567740 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zssfd_must-gather-fjnpl_17ab58bd-df80-40ba-9ca2-2ca394fc5767/copy/0.log" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.568559 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zssfd/must-gather-fjnpl" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.570152 5116 status_manager.go:895] "Failed to get status for pod" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" pod="openshift-must-gather-zssfd/must-gather-fjnpl" err="pods \"must-gather-fjnpl\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-zssfd\": no relationship found between node 'crc' and this object" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.667239 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfkxh\" (UniqueName: \"kubernetes.io/projected/17ab58bd-df80-40ba-9ca2-2ca394fc5767-kube-api-access-qfkxh\") pod \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.667443 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17ab58bd-df80-40ba-9ca2-2ca394fc5767-must-gather-output\") pod \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\" (UID: \"17ab58bd-df80-40ba-9ca2-2ca394fc5767\") " Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.676325 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ab58bd-df80-40ba-9ca2-2ca394fc5767-kube-api-access-qfkxh" (OuterVolumeSpecName: "kube-api-access-qfkxh") pod "17ab58bd-df80-40ba-9ca2-2ca394fc5767" (UID: "17ab58bd-df80-40ba-9ca2-2ca394fc5767"). InnerVolumeSpecName "kube-api-access-qfkxh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.719158 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ab58bd-df80-40ba-9ca2-2ca394fc5767-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "17ab58bd-df80-40ba-9ca2-2ca394fc5767" (UID: "17ab58bd-df80-40ba-9ca2-2ca394fc5767"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.768957 5116 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17ab58bd-df80-40ba-9ca2-2ca394fc5767-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.769000 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qfkxh\" (UniqueName: \"kubernetes.io/projected/17ab58bd-df80-40ba-9ca2-2ca394fc5767-kube-api-access-qfkxh\") on node \"crc\" DevicePath \"\"" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.837279 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zssfd_must-gather-fjnpl_17ab58bd-df80-40ba-9ca2-2ca394fc5767/copy/0.log" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.838009 5116 generic.go:358] "Generic (PLEG): container finished" podID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerID="39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e" exitCode=143 Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.838128 5116 scope.go:117] "RemoveContainer" containerID="39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.838136 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zssfd/must-gather-fjnpl" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.840795 5116 status_manager.go:895] "Failed to get status for pod" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" pod="openshift-must-gather-zssfd/must-gather-fjnpl" err="pods \"must-gather-fjnpl\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-zssfd\": no relationship found between node 'crc' and this object" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.859066 5116 status_manager.go:895] "Failed to get status for pod" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" pod="openshift-must-gather-zssfd/must-gather-fjnpl" err="pods \"must-gather-fjnpl\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-zssfd\": no relationship found between node 'crc' and this object" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.861838 5116 scope.go:117] "RemoveContainer" containerID="756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.936163 5116 scope.go:117] "RemoveContainer" containerID="39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e" Dec 12 16:39:26 crc kubenswrapper[5116]: E1212 16:39:26.936731 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e\": container with ID starting with 39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e not found: ID does not exist" containerID="39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.936783 5116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e"} err="failed to get container status \"39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e\": rpc error: code = NotFound desc = could not find container \"39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e\": container with ID starting with 39dc329fd691a026f5f54f809ccf2bb82e8d0ad7fc4f79db3a8e669be16cb14e not found: ID does not exist" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.936814 5116 scope.go:117] "RemoveContainer" containerID="756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55" Dec 12 16:39:26 crc kubenswrapper[5116]: E1212 16:39:26.937147 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55\": container with ID starting with 756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55 not found: ID does not exist" containerID="756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55" Dec 12 16:39:26 crc kubenswrapper[5116]: I1212 16:39:26.937192 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55"} err="failed to get container status \"756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55\": rpc error: code = NotFound desc = could not find container \"756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55\": container with ID starting with 756169a15a601f5a8c2c0b254419896fd2b08aad0536bac6d4aef6c214686c55 not found: ID does not exist" Dec 12 16:39:28 crc kubenswrapper[5116]: I1212 16:39:28.054061 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" path="/var/lib/kubelet/pods/17ab58bd-df80-40ba-9ca2-2ca394fc5767/volumes" Dec 12 16:39:49 crc kubenswrapper[5116]: I1212 
16:39:49.416603 5116 patch_prober.go:28] interesting pod/machine-config-daemon-bb58t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:39:49 crc kubenswrapper[5116]: I1212 16:39:49.417198 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:39:49 crc kubenswrapper[5116]: I1212 16:39:49.417257 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" Dec 12 16:39:49 crc kubenswrapper[5116]: I1212 16:39:49.417988 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4325b6dc3d5013355d658f7ab7f24472fe6dde112e4207a974f70099506c4d6"} pod="openshift-machine-config-operator/machine-config-daemon-bb58t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:39:49 crc kubenswrapper[5116]: I1212 16:39:49.418051 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" podUID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerName="machine-config-daemon" containerID="cri-o://e4325b6dc3d5013355d658f7ab7f24472fe6dde112e4207a974f70099506c4d6" gracePeriod=600 Dec 12 16:39:50 crc kubenswrapper[5116]: I1212 16:39:50.066560 5116 generic.go:358] "Generic (PLEG): container finished" podID="8fedd19a-ed2a-4e65-a3ad-e104203261fe" containerID="e4325b6dc3d5013355d658f7ab7f24472fe6dde112e4207a974f70099506c4d6" exitCode=0 Dec 12 
16:39:50 crc kubenswrapper[5116]: I1212 16:39:50.066648 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerDied","Data":"e4325b6dc3d5013355d658f7ab7f24472fe6dde112e4207a974f70099506c4d6"} Dec 12 16:39:50 crc kubenswrapper[5116]: I1212 16:39:50.067736 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bb58t" event={"ID":"8fedd19a-ed2a-4e65-a3ad-e104203261fe","Type":"ContainerStarted","Data":"6d980d256850193acc752b1b85db476e4d1b2179cc15c768607e5af12a1ce734"} Dec 12 16:39:50 crc kubenswrapper[5116]: I1212 16:39:50.067765 5116 scope.go:117] "RemoveContainer" containerID="7c8bd92412771ff512e33cbee9cb6403cfdd2288cceb73946bb3fd16bc6d5c27" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.097946 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-zfh7r"] Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099216 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerName="copy" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099232 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerName="copy" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099249 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="registry-server" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099257 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="registry-server" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099269 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" 
containerName="gather" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099276 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerName="gather" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099287 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="extract-content" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099292 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="extract-content" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099301 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="extract-content" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099306 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="extract-content" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099313 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="extract-utilities" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099319 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="extract-utilities" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099330 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="registry-server" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099335 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="registry-server" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099351 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="extract-utilities" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099356 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="extract-utilities" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099461 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="346bfc49-6cd8-41c5-b929-c89320472500" containerName="registry-server" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099474 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6d7a0c4-070b-4254-872e-a941c268533c" containerName="registry-server" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099487 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerName="copy" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.099495 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17ab58bd-df80-40ba-9ca2-2ca394fc5767" containerName="gather" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.108918 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.114808 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-zfh7r"] Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.158041 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4d9\" (UniqueName: \"kubernetes.io/projected/9dfaab4a-c3db-4537-ac73-91371784df77-kube-api-access-qq4d9\") pod \"infrawatch-operators-zfh7r\" (UID: \"9dfaab4a-c3db-4537-ac73-91371784df77\") " pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.259485 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qq4d9\" (UniqueName: \"kubernetes.io/projected/9dfaab4a-c3db-4537-ac73-91371784df77-kube-api-access-qq4d9\") pod \"infrawatch-operators-zfh7r\" (UID: \"9dfaab4a-c3db-4537-ac73-91371784df77\") " pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.285062 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq4d9\" (UniqueName: \"kubernetes.io/projected/9dfaab4a-c3db-4537-ac73-91371784df77-kube-api-access-qq4d9\") pod \"infrawatch-operators-zfh7r\" (UID: \"9dfaab4a-c3db-4537-ac73-91371784df77\") " pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.480046 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:11 crc kubenswrapper[5116]: I1212 16:40:11.929357 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-zfh7r"] Dec 12 16:40:12 crc kubenswrapper[5116]: I1212 16:40:12.271972 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-zfh7r" event={"ID":"9dfaab4a-c3db-4537-ac73-91371784df77","Type":"ContainerStarted","Data":"8ae842ec3ec934679ecc6e1299bb3686bdcd2e029f91a02119af8261cc787448"} Dec 12 16:40:14 crc kubenswrapper[5116]: I1212 16:40:14.286524 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-zfh7r" event={"ID":"9dfaab4a-c3db-4537-ac73-91371784df77","Type":"ContainerStarted","Data":"c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1"} Dec 12 16:40:14 crc kubenswrapper[5116]: I1212 16:40:14.317256 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-zfh7r" podStartSLOduration=1.4460995859999999 podStartE2EDuration="3.317237396s" podCreationTimestamp="2025-12-12 16:40:11 +0000 UTC" firstStartedPulling="2025-12-12 16:40:11.935772194 +0000 UTC m=+1506.399983950" lastFinishedPulling="2025-12-12 16:40:13.806910004 +0000 UTC m=+1508.271121760" observedRunningTime="2025-12-12 16:40:14.31294388 +0000 UTC m=+1508.777155646" watchObservedRunningTime="2025-12-12 16:40:14.317237396 +0000 UTC m=+1508.781449152" Dec 12 16:40:18 crc kubenswrapper[5116]: I1212 16:40:18.661176 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-8d6d7544-9gbxz_28f074df-1514-4cf8-8765-5e3523342f2e/oauth-openshift/0.log" Dec 12 16:40:18 crc kubenswrapper[5116]: I1212 16:40:18.699602 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bphkq_0e71d710-0829-4655-b88f-9318b9776228/kube-multus/0.log" Dec 12 16:40:18 
crc kubenswrapper[5116]: I1212 16:40:18.723269 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:40:21 crc kubenswrapper[5116]: I1212 16:40:21.480392 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:21 crc kubenswrapper[5116]: I1212 16:40:21.480883 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:21 crc kubenswrapper[5116]: I1212 16:40:21.507981 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:22 crc kubenswrapper[5116]: I1212 16:40:22.409908 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:22 crc kubenswrapper[5116]: I1212 16:40:22.450814 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-zfh7r"] Dec 12 16:40:24 crc kubenswrapper[5116]: I1212 16:40:24.361992 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-zfh7r" podUID="9dfaab4a-c3db-4537-ac73-91371784df77" containerName="registry-server" containerID="cri-o://c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1" gracePeriod=2 Dec 12 16:40:24 crc kubenswrapper[5116]: I1212 16:40:24.795921 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:24 crc kubenswrapper[5116]: I1212 16:40:24.802792 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq4d9\" (UniqueName: \"kubernetes.io/projected/9dfaab4a-c3db-4537-ac73-91371784df77-kube-api-access-qq4d9\") pod \"9dfaab4a-c3db-4537-ac73-91371784df77\" (UID: \"9dfaab4a-c3db-4537-ac73-91371784df77\") " Dec 12 16:40:24 crc kubenswrapper[5116]: I1212 16:40:24.811418 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfaab4a-c3db-4537-ac73-91371784df77-kube-api-access-qq4d9" (OuterVolumeSpecName: "kube-api-access-qq4d9") pod "9dfaab4a-c3db-4537-ac73-91371784df77" (UID: "9dfaab4a-c3db-4537-ac73-91371784df77"). InnerVolumeSpecName "kube-api-access-qq4d9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:40:24 crc kubenswrapper[5116]: I1212 16:40:24.904725 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qq4d9\" (UniqueName: \"kubernetes.io/projected/9dfaab4a-c3db-4537-ac73-91371784df77-kube-api-access-qq4d9\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.370237 5116 generic.go:358] "Generic (PLEG): container finished" podID="9dfaab4a-c3db-4537-ac73-91371784df77" containerID="c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1" exitCode=0 Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.370306 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-zfh7r" event={"ID":"9dfaab4a-c3db-4537-ac73-91371784df77","Type":"ContainerDied","Data":"c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1"} Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.370373 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-zfh7r" Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.370397 5116 scope.go:117] "RemoveContainer" containerID="c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1" Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.370383 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-zfh7r" event={"ID":"9dfaab4a-c3db-4537-ac73-91371784df77","Type":"ContainerDied","Data":"8ae842ec3ec934679ecc6e1299bb3686bdcd2e029f91a02119af8261cc787448"} Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.389838 5116 scope.go:117] "RemoveContainer" containerID="c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1" Dec 12 16:40:25 crc kubenswrapper[5116]: E1212 16:40:25.390295 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1\": container with ID starting with c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1 not found: ID does not exist" containerID="c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1" Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.390344 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1"} err="failed to get container status \"c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1\": rpc error: code = NotFound desc = could not find container \"c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1\": container with ID starting with c2b9106bc8f97c87f81e533a9b2147b74b581674e90fad99637ab76fd61f58e1 not found: ID does not exist" Dec 12 16:40:25 crc kubenswrapper[5116]: I1212 16:40:25.410065 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-zfh7r"] Dec 12 16:40:25 crc 
kubenswrapper[5116]: I1212 16:40:25.418282 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-zfh7r"] Dec 12 16:40:26 crc kubenswrapper[5116]: I1212 16:40:26.054890 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfaab4a-c3db-4537-ac73-91371784df77" path="/var/lib/kubelet/pods/9dfaab4a-c3db-4537-ac73-91371784df77/volumes"